Test Report: Docker_Linux_crio 21894

8496c1ca7722bf7d926446d0df8cf9af55d7419f:2025-11-15:42336

Failed tests (37/328)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.26
35 TestAddons/parallel/Registry 16.85
36 TestAddons/parallel/RegistryCreds 0.79
37 TestAddons/parallel/Ingress 149.36
38 TestAddons/parallel/InspektorGadget 5.25
39 TestAddons/parallel/MetricsServer 5.32
41 TestAddons/parallel/CSI 35.44
42 TestAddons/parallel/Headlamp 2.61
43 TestAddons/parallel/CloudSpanner 5.25
44 TestAddons/parallel/LocalPath 10.14
45 TestAddons/parallel/NvidiaDevicePlugin 5.56
46 TestAddons/parallel/Yakd 5.25
47 TestAddons/parallel/AmdGpuDevicePlugin 5.25
97 TestFunctional/parallel/ServiceCmdConnect 602.95
125 TestFunctional/parallel/ServiceCmd/DeployApp 600.6
138 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.25
139 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.87
140 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.05
141 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.29
143 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
144 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.47
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.55
153 TestFunctional/parallel/ServiceCmd/Format 0.55
154 TestFunctional/parallel/ServiceCmd/URL 0.54
191 TestJSONOutput/pause/Command 2.14
197 TestJSONOutput/unpause/Command 1.44
285 TestPause/serial/Pause 7.38
341 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.44
349 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.43
359 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.23
363 TestStartStop/group/old-k8s-version/serial/Pause 5.42
371 TestStartStop/group/no-preload/serial/Pause 6.73
373 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.27
376 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.01
386 TestStartStop/group/newest-cni/serial/Pause 5.37
389 TestStartStop/group/embed-certs/serial/Pause 6.61
393 TestStartStop/group/default-k8s-diff-port/serial/Pause 5.16
TestAddons/serial/Volcano (0.26s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-209049 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-209049 addons disable volcano --alsologtostderr -v=1: exit status 11 (258.239749ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1115 09:43:53.532438   68737 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:43:53.532720   68737 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:43:53.532732   68737 out.go:374] Setting ErrFile to fd 2...
	I1115 09:43:53.532736   68737 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:43:53.532929   68737 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 09:43:53.533223   68737 mustload.go:66] Loading cluster: addons-209049
	I1115 09:43:53.533576   68737 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:43:53.533597   68737 addons.go:607] checking whether the cluster is paused
	I1115 09:43:53.533680   68737 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:43:53.533706   68737 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:43:53.534424   68737 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:43:53.553667   68737 ssh_runner.go:195] Run: systemctl --version
	I1115 09:43:53.553732   68737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:43:53.572176   68737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:43:53.664719   68737 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:43:53.664806   68737 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:43:53.695426   68737 cri.go:89] found id: "3caa65a862513e243b22e76e1bbf885e3ac037a176ca99b12f91d698216975db"
	I1115 09:43:53.695451   68737 cri.go:89] found id: "b0ff95e639d4d5eb2c0d77d21ada78f8cd1fb53b0f074a034a43d45e26991503"
	I1115 09:43:53.695455   68737 cri.go:89] found id: "c9bdb51e12a14483aa19c676698377a0b60b77720f0628965eea3278bbb8e041"
	I1115 09:43:53.695458   68737 cri.go:89] found id: "e7dcd097399e865e5ae5f0e729bdc0f1048db8b421e3cb038204849e0f017ce4"
	I1115 09:43:53.695460   68737 cri.go:89] found id: "acf4ca35c9f55a5b7ca511482b0cfe1fd19ad4da286e419734932ebc92d08987"
	I1115 09:43:53.695464   68737 cri.go:89] found id: "171c9fa6da7f829033739f1a599a437a0f9fb0490f6e35cb28121675ec6610d5"
	I1115 09:43:53.695466   68737 cri.go:89] found id: "ff456e58c6b5371ae6d2128ce2bd4b5688b2bdd7c3b3eb2b96101bb1b58fe2c5"
	I1115 09:43:53.695469   68737 cri.go:89] found id: "ee4946da5ae0da768ba560dc1b29498a8426405bac06d4d18f0a838d40c80725"
	I1115 09:43:53.695472   68737 cri.go:89] found id: "103636dd26caf3b9549cd406346b20c65628300e437e7be4616d3f0077c744bd"
	I1115 09:43:53.695477   68737 cri.go:89] found id: "49680c8d74f4f54e56ef4ffd5f2ca74f9034bbb45596e528bea07a9727288e4c"
	I1115 09:43:53.695480   68737 cri.go:89] found id: "b667435537d1ec9aa535058d15e290f6af371012bddf3d30095187b5c578a6f7"
	I1115 09:43:53.695482   68737 cri.go:89] found id: "0eebfaee45b609055247f2529eb805a5c5b268a2be72bc3dc72864bae3e6b98f"
	I1115 09:43:53.695484   68737 cri.go:89] found id: "2e0b28ec2dfa3079760cebaa9bb76189b77530f1910306d850dbf67dea10ec99"
	I1115 09:43:53.695487   68737 cri.go:89] found id: "4c734c004dda05124ee316d7d4e813486d10a56625af5fde238c99fe3c7fcb59"
	I1115 09:43:53.695490   68737 cri.go:89] found id: "db6577073b2a6a2e7ebbec11d5b9ea8189c70d1adfad3656b9c47577a4cda077"
	I1115 09:43:53.695494   68737 cri.go:89] found id: "ed655dc7f306b7a351e2adf5b80bffbf7072a9bb72cdb5eac89f029f44b5af1e"
	I1115 09:43:53.695497   68737 cri.go:89] found id: "abed161df6f25739be955e904c69c8cc237aee9607441efb5b85c298a8066bc3"
	I1115 09:43:53.695503   68737 cri.go:89] found id: "1bdef9117bea1f9beb91b7cd74c36b09d5d21005cb342d6d96771811af8c1a65"
	I1115 09:43:53.695505   68737 cri.go:89] found id: "c47933319f25ea439a895d8a214194c80c6c0e5be436f0cda95e520fb61fcc42"
	I1115 09:43:53.695508   68737 cri.go:89] found id: "bf4bdcddcb90f809aee08bb750d5e3282b1083556ad258032140c881f019a634"
	I1115 09:43:53.695517   68737 cri.go:89] found id: "2d0c6bfa456fdc1fae3dbb4e08aebd592ec5b3e4b99f82367927552be2506b4b"
	I1115 09:43:53.695520   68737 cri.go:89] found id: "fc273347a0fa0ac4a76f1735e0dc29501058daa141499c42e0703edf64d3cc86"
	I1115 09:43:53.695523   68737 cri.go:89] found id: "8f354571302cfa24ed2fcb9dc1a59908900201f2ee8a9f0985e0cd698988ceff"
	I1115 09:43:53.695526   68737 cri.go:89] found id: "1f26d41b1ae72e1dc34f54376e004475d2c4c414c8b70169e5095a83b0a7212d"
	I1115 09:43:53.695528   68737 cri.go:89] found id: ""
	I1115 09:43:53.695576   68737 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:43:53.710542   68737 out.go:203] 
	W1115 09:43:53.712057   68737 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:43:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:43:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:43:53.712077   68737 out.go:285] * 
	* 
	W1115 09:43:53.716574   68737 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:43:53.718214   68737 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-209049 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.26s)
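Note: the addon-disable failures in this report share the stderr signature above. minikube's paused-state check lists the kube-system containers via crictl and then runs "sudo runc list -f json", which fails on this crio node because /run/runc does not exist, producing the MK_ADDON_DISABLE_PAUSED exit. A minimal sketch of replaying those two commands by hand against this run's profile (both commands are copied from the log above):

	# Replay the paused-check commands from the log inside the minikube node.
	# The crictl listing succeeds and prints the kube-system container IDs seen above:
	out/minikube-linux-amd64 -p addons-209049 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# The follow-up runc listing is the step that fails with "open /run/runc: no such file or directory":
	out/minikube-linux-amd64 -p addons-209049 ssh "sudo runc list -f json"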

TestAddons/parallel/Registry (16.85s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.355553ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-fwbg5" [83b0e192-16e8-40e2-9ff5-a5964957dc8a] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002853582s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-xzbqg" [bcc22f7e-dc0a-4976-9707-3acbdc5421f3] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003326208s
addons_test.go:392: (dbg) Run:  kubectl --context addons-209049 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-209049 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-209049 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.1764525s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-209049 ip
2025/11/15 09:44:22 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-209049 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-209049 addons disable registry --alsologtostderr -v=1: exit status 11 (462.422037ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1115 09:44:23.021502   71384 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:44:23.022011   71384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:44:23.022028   71384 out.go:374] Setting ErrFile to fd 2...
	I1115 09:44:23.022035   71384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:44:23.022514   71384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 09:44:23.023221   71384 mustload.go:66] Loading cluster: addons-209049
	I1115 09:44:23.023629   71384 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:44:23.023648   71384 addons.go:607] checking whether the cluster is paused
	I1115 09:44:23.023757   71384 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:44:23.023772   71384 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:44:23.024213   71384 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:44:23.042399   71384 ssh_runner.go:195] Run: systemctl --version
	I1115 09:44:23.042453   71384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:44:23.061025   71384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:44:23.152584   71384 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:44:23.152659   71384 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:44:23.185354   71384 cri.go:89] found id: "3caa65a862513e243b22e76e1bbf885e3ac037a176ca99b12f91d698216975db"
	I1115 09:44:23.185384   71384 cri.go:89] found id: "b0ff95e639d4d5eb2c0d77d21ada78f8cd1fb53b0f074a034a43d45e26991503"
	I1115 09:44:23.185388   71384 cri.go:89] found id: "c9bdb51e12a14483aa19c676698377a0b60b77720f0628965eea3278bbb8e041"
	I1115 09:44:23.185392   71384 cri.go:89] found id: "e7dcd097399e865e5ae5f0e729bdc0f1048db8b421e3cb038204849e0f017ce4"
	I1115 09:44:23.185394   71384 cri.go:89] found id: "acf4ca35c9f55a5b7ca511482b0cfe1fd19ad4da286e419734932ebc92d08987"
	I1115 09:44:23.185397   71384 cri.go:89] found id: "171c9fa6da7f829033739f1a599a437a0f9fb0490f6e35cb28121675ec6610d5"
	I1115 09:44:23.185400   71384 cri.go:89] found id: "ff456e58c6b5371ae6d2128ce2bd4b5688b2bdd7c3b3eb2b96101bb1b58fe2c5"
	I1115 09:44:23.185408   71384 cri.go:89] found id: "ee4946da5ae0da768ba560dc1b29498a8426405bac06d4d18f0a838d40c80725"
	I1115 09:44:23.185411   71384 cri.go:89] found id: "103636dd26caf3b9549cd406346b20c65628300e437e7be4616d3f0077c744bd"
	I1115 09:44:23.185419   71384 cri.go:89] found id: "49680c8d74f4f54e56ef4ffd5f2ca74f9034bbb45596e528bea07a9727288e4c"
	I1115 09:44:23.185423   71384 cri.go:89] found id: "b667435537d1ec9aa535058d15e290f6af371012bddf3d30095187b5c578a6f7"
	I1115 09:44:23.185427   71384 cri.go:89] found id: "0eebfaee45b609055247f2529eb805a5c5b268a2be72bc3dc72864bae3e6b98f"
	I1115 09:44:23.185431   71384 cri.go:89] found id: "2e0b28ec2dfa3079760cebaa9bb76189b77530f1910306d850dbf67dea10ec99"
	I1115 09:44:23.185435   71384 cri.go:89] found id: "4c734c004dda05124ee316d7d4e813486d10a56625af5fde238c99fe3c7fcb59"
	I1115 09:44:23.185439   71384 cri.go:89] found id: "db6577073b2a6a2e7ebbec11d5b9ea8189c70d1adfad3656b9c47577a4cda077"
	I1115 09:44:23.185457   71384 cri.go:89] found id: "ed655dc7f306b7a351e2adf5b80bffbf7072a9bb72cdb5eac89f029f44b5af1e"
	I1115 09:44:23.185464   71384 cri.go:89] found id: "abed161df6f25739be955e904c69c8cc237aee9607441efb5b85c298a8066bc3"
	I1115 09:44:23.185470   71384 cri.go:89] found id: "1bdef9117bea1f9beb91b7cd74c36b09d5d21005cb342d6d96771811af8c1a65"
	I1115 09:44:23.185472   71384 cri.go:89] found id: "c47933319f25ea439a895d8a214194c80c6c0e5be436f0cda95e520fb61fcc42"
	I1115 09:44:23.185474   71384 cri.go:89] found id: "bf4bdcddcb90f809aee08bb750d5e3282b1083556ad258032140c881f019a634"
	I1115 09:44:23.185477   71384 cri.go:89] found id: "2d0c6bfa456fdc1fae3dbb4e08aebd592ec5b3e4b99f82367927552be2506b4b"
	I1115 09:44:23.185479   71384 cri.go:89] found id: "fc273347a0fa0ac4a76f1735e0dc29501058daa141499c42e0703edf64d3cc86"
	I1115 09:44:23.185481   71384 cri.go:89] found id: "8f354571302cfa24ed2fcb9dc1a59908900201f2ee8a9f0985e0cd698988ceff"
	I1115 09:44:23.185484   71384 cri.go:89] found id: "1f26d41b1ae72e1dc34f54376e004475d2c4c414c8b70169e5095a83b0a7212d"
	I1115 09:44:23.185486   71384 cri.go:89] found id: ""
	I1115 09:44:23.185536   71384 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:44:23.249749   71384 out.go:203] 
	W1115 09:44:23.291829   71384 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:44:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:44:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:44:23.291858   71384 out.go:285] * 
	* 
	W1115 09:44:23.299482   71384 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:44:23.422261   71384 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-209049 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (16.85s)

TestAddons/parallel/RegistryCreds (0.79s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.353493ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-209049
addons_test.go:332: (dbg) Run:  kubectl --context addons-209049 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-209049 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-209049 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (532.240269ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1115 09:44:23.752510   71788 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:44:23.752775   71788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:44:23.752785   71788 out.go:374] Setting ErrFile to fd 2...
	I1115 09:44:23.752789   71788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:44:23.753016   71788 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 09:44:23.753285   71788 mustload.go:66] Loading cluster: addons-209049
	I1115 09:44:23.753617   71788 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:44:23.753632   71788 addons.go:607] checking whether the cluster is paused
	I1115 09:44:23.753711   71788 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:44:23.753722   71788 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:44:23.754109   71788 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:44:23.771940   71788 ssh_runner.go:195] Run: systemctl --version
	I1115 09:44:23.772028   71788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:44:23.796099   71788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:44:24.080552   71788 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:44:24.080659   71788 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:44:24.188619   71788 cri.go:89] found id: "3caa65a862513e243b22e76e1bbf885e3ac037a176ca99b12f91d698216975db"
	I1115 09:44:24.188720   71788 cri.go:89] found id: "b0ff95e639d4d5eb2c0d77d21ada78f8cd1fb53b0f074a034a43d45e26991503"
	I1115 09:44:24.188775   71788 cri.go:89] found id: "c9bdb51e12a14483aa19c676698377a0b60b77720f0628965eea3278bbb8e041"
	I1115 09:44:24.188784   71788 cri.go:89] found id: "e7dcd097399e865e5ae5f0e729bdc0f1048db8b421e3cb038204849e0f017ce4"
	I1115 09:44:24.188789   71788 cri.go:89] found id: "acf4ca35c9f55a5b7ca511482b0cfe1fd19ad4da286e419734932ebc92d08987"
	I1115 09:44:24.188797   71788 cri.go:89] found id: "171c9fa6da7f829033739f1a599a437a0f9fb0490f6e35cb28121675ec6610d5"
	I1115 09:44:24.188803   71788 cri.go:89] found id: "ff456e58c6b5371ae6d2128ce2bd4b5688b2bdd7c3b3eb2b96101bb1b58fe2c5"
	I1115 09:44:24.188812   71788 cri.go:89] found id: "ee4946da5ae0da768ba560dc1b29498a8426405bac06d4d18f0a838d40c80725"
	I1115 09:44:24.188817   71788 cri.go:89] found id: "103636dd26caf3b9549cd406346b20c65628300e437e7be4616d3f0077c744bd"
	I1115 09:44:24.188864   71788 cri.go:89] found id: "49680c8d74f4f54e56ef4ffd5f2ca74f9034bbb45596e528bea07a9727288e4c"
	I1115 09:44:24.188871   71788 cri.go:89] found id: "b667435537d1ec9aa535058d15e290f6af371012bddf3d30095187b5c578a6f7"
	I1115 09:44:24.188874   71788 cri.go:89] found id: "0eebfaee45b609055247f2529eb805a5c5b268a2be72bc3dc72864bae3e6b98f"
	I1115 09:44:24.188877   71788 cri.go:89] found id: "2e0b28ec2dfa3079760cebaa9bb76189b77530f1910306d850dbf67dea10ec99"
	I1115 09:44:24.188879   71788 cri.go:89] found id: "4c734c004dda05124ee316d7d4e813486d10a56625af5fde238c99fe3c7fcb59"
	I1115 09:44:24.188882   71788 cri.go:89] found id: "db6577073b2a6a2e7ebbec11d5b9ea8189c70d1adfad3656b9c47577a4cda077"
	I1115 09:44:24.188886   71788 cri.go:89] found id: "ed655dc7f306b7a351e2adf5b80bffbf7072a9bb72cdb5eac89f029f44b5af1e"
	I1115 09:44:24.188888   71788 cri.go:89] found id: "abed161df6f25739be955e904c69c8cc237aee9607441efb5b85c298a8066bc3"
	I1115 09:44:24.188893   71788 cri.go:89] found id: "1bdef9117bea1f9beb91b7cd74c36b09d5d21005cb342d6d96771811af8c1a65"
	I1115 09:44:24.188896   71788 cri.go:89] found id: "c47933319f25ea439a895d8a214194c80c6c0e5be436f0cda95e520fb61fcc42"
	I1115 09:44:24.188923   71788 cri.go:89] found id: "bf4bdcddcb90f809aee08bb750d5e3282b1083556ad258032140c881f019a634"
	I1115 09:44:24.188930   71788 cri.go:89] found id: "2d0c6bfa456fdc1fae3dbb4e08aebd592ec5b3e4b99f82367927552be2506b4b"
	I1115 09:44:24.188934   71788 cri.go:89] found id: "fc273347a0fa0ac4a76f1735e0dc29501058daa141499c42e0703edf64d3cc86"
	I1115 09:44:24.188938   71788 cri.go:89] found id: "8f354571302cfa24ed2fcb9dc1a59908900201f2ee8a9f0985e0cd698988ceff"
	I1115 09:44:24.188941   71788 cri.go:89] found id: "1f26d41b1ae72e1dc34f54376e004475d2c4c414c8b70169e5095a83b0a7212d"
	I1115 09:44:24.188946   71788 cri.go:89] found id: ""
	I1115 09:44:24.189018   71788 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:44:24.206882   71788 out.go:203] 
	W1115 09:44:24.208748   71788 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:44:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:44:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:44:24.208783   71788 out.go:285] * 
	* 
	W1115 09:44:24.214026   71788 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:44:24.215557   71788 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-209049 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.79s)

TestAddons/parallel/Ingress (149.36s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-209049 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-209049 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-209049 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [8b947b04-3a30-4c82-90a1-681a4e8dbade] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [8b947b04-3a30-4c82-90a1-681a4e8dbade] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.003459727s
I1115 09:44:32.194127   58962 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-209049 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-209049 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.814481307s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-209049 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-209049 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
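Note on the failed probe above: the remote command exited with status 28, which corresponds to curl's "operation timed out" error and is consistent with the ssh command running for roughly 2m13s before giving up. A hedged sketch of re-running the same check by hand against this profile, with an explicit timeout and verbose output added for debugging:

	# Re-run the in-VM ingress probe from the log, bounded and verbose.
	out/minikube-linux-amd64 -p addons-209049 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# If it still times out, inspect the ingress-nginx controller directly.
	kubectl --context addons-209049 -n ingress-nginx get pods,svc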
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-209049
helpers_test.go:243: (dbg) docker inspect addons-209049:

-- stdout --
	[
	    {
	        "Id": "95837a795344356a1a114ddf87e92b007ae01ddbe0deac5a406c954fdd9cec8c",
	        "Created": "2025-11-15T09:41:31.042910437Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 61080,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T09:41:31.079824221Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/95837a795344356a1a114ddf87e92b007ae01ddbe0deac5a406c954fdd9cec8c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/95837a795344356a1a114ddf87e92b007ae01ddbe0deac5a406c954fdd9cec8c/hostname",
	        "HostsPath": "/var/lib/docker/containers/95837a795344356a1a114ddf87e92b007ae01ddbe0deac5a406c954fdd9cec8c/hosts",
	        "LogPath": "/var/lib/docker/containers/95837a795344356a1a114ddf87e92b007ae01ddbe0deac5a406c954fdd9cec8c/95837a795344356a1a114ddf87e92b007ae01ddbe0deac5a406c954fdd9cec8c-json.log",
	        "Name": "/addons-209049",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-209049:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-209049",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "95837a795344356a1a114ddf87e92b007ae01ddbe0deac5a406c954fdd9cec8c",
	                "LowerDir": "/var/lib/docker/overlay2/0bea74c9d174af12d0fdc1cf6c7f4a454922325e50efe47725cba729a0727358-init/diff:/var/lib/docker/overlay2/507c85b6fb43d9c98216fac79d7a8d08dd20b2a63d0fbfc758336e5d38c04044/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0bea74c9d174af12d0fdc1cf6c7f4a454922325e50efe47725cba729a0727358/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0bea74c9d174af12d0fdc1cf6c7f4a454922325e50efe47725cba729a0727358/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0bea74c9d174af12d0fdc1cf6c7f4a454922325e50efe47725cba729a0727358/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-209049",
	                "Source": "/var/lib/docker/volumes/addons-209049/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-209049",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-209049",
	                "name.minikube.sigs.k8s.io": "addons-209049",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "06e2c71b6cbccd1d4c51f6c3805feb68e36ce9441eb0040f47d0ee0bc8c38a66",
	            "SandboxKey": "/var/run/docker/netns/06e2c71b6cbc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-209049": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "26a7626725f3ddc9991ec6ab765481cf6dae3a4fcf9d12ac6a76dd599e86b571",
	                    "EndpointID": "b62a20aea2e494f8b7019f1b0691b31fd48128bf365f9c97203b08d51b7319b5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "8e:ac:c7:05:09:62",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-209049",
	                        "95837a795344"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-209049 -n addons-209049
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-209049 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-209049 logs -n 25: (1.144741111s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-602120 --alsologtostderr --binary-mirror http://127.0.0.1:40137 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-602120 │ jenkins │ v1.37.0 │ 15 Nov 25 09:41 UTC │                     │
	│ delete  │ -p binary-mirror-602120                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-602120 │ jenkins │ v1.37.0 │ 15 Nov 25 09:41 UTC │ 15 Nov 25 09:41 UTC │
	│ addons  │ disable dashboard -p addons-209049                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-209049        │ jenkins │ v1.37.0 │ 15 Nov 25 09:41 UTC │                     │
	│ addons  │ enable dashboard -p addons-209049                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-209049        │ jenkins │ v1.37.0 │ 15 Nov 25 09:41 UTC │                     │
	│ start   │ -p addons-209049 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-209049        │ jenkins │ v1.37.0 │ 15 Nov 25 09:41 UTC │ 15 Nov 25 09:43 UTC │
	│ addons  │ addons-209049 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-209049        │ jenkins │ v1.37.0 │ 15 Nov 25 09:43 UTC │                     │
	│ addons  │ addons-209049 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-209049        │ jenkins │ v1.37.0 │ 15 Nov 25 09:44 UTC │                     │
	│ addons  │ enable headlamp -p addons-209049 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-209049        │ jenkins │ v1.37.0 │ 15 Nov 25 09:44 UTC │                     │
	│ addons  │ addons-209049 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-209049        │ jenkins │ v1.37.0 │ 15 Nov 25 09:44 UTC │                     │
	│ addons  │ addons-209049 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-209049        │ jenkins │ v1.37.0 │ 15 Nov 25 09:44 UTC │                     │
	│ addons  │ addons-209049 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-209049        │ jenkins │ v1.37.0 │ 15 Nov 25 09:44 UTC │                     │
	│ ssh     │ addons-209049 ssh cat /opt/local-path-provisioner/pvc-8bd24f7f-8ecb-40a6-a860-d94dacd37833_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-209049        │ jenkins │ v1.37.0 │ 15 Nov 25 09:44 UTC │ 15 Nov 25 09:44 UTC │
	│ addons  │ addons-209049 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-209049        │ jenkins │ v1.37.0 │ 15 Nov 25 09:44 UTC │                     │
	│ addons  │ addons-209049 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-209049        │ jenkins │ v1.37.0 │ 15 Nov 25 09:44 UTC │                     │
	│ addons  │ addons-209049 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-209049        │ jenkins │ v1.37.0 │ 15 Nov 25 09:44 UTC │                     │
	│ addons  │ addons-209049 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-209049        │ jenkins │ v1.37.0 │ 15 Nov 25 09:44 UTC │                     │
	│ ip      │ addons-209049 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-209049        │ jenkins │ v1.37.0 │ 15 Nov 25 09:44 UTC │ 15 Nov 25 09:44 UTC │
	│ addons  │ addons-209049 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-209049        │ jenkins │ v1.37.0 │ 15 Nov 25 09:44 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-209049                                                                                                                                                                                                                                                                                                                                                                                           │ addons-209049        │ jenkins │ v1.37.0 │ 15 Nov 25 09:44 UTC │ 15 Nov 25 09:44 UTC │
	│ addons  │ addons-209049 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-209049        │ jenkins │ v1.37.0 │ 15 Nov 25 09:44 UTC │                     │
	│ addons  │ addons-209049 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-209049        │ jenkins │ v1.37.0 │ 15 Nov 25 09:44 UTC │                     │
	│ ssh     │ addons-209049 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-209049        │ jenkins │ v1.37.0 │ 15 Nov 25 09:44 UTC │                     │
	│ addons  │ addons-209049 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-209049        │ jenkins │ v1.37.0 │ 15 Nov 25 09:44 UTC │                     │
	│ addons  │ addons-209049 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-209049        │ jenkins │ v1.37.0 │ 15 Nov 25 09:44 UTC │                     │
	│ ip      │ addons-209049 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-209049        │ jenkins │ v1.37.0 │ 15 Nov 25 09:46 UTC │ 15 Nov 25 09:46 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:41:06
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:41:06.071042   60422 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:41:06.071323   60422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:41:06.071333   60422 out.go:374] Setting ErrFile to fd 2...
	I1115 09:41:06.071337   60422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:41:06.071555   60422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 09:41:06.072108   60422 out.go:368] Setting JSON to false
	I1115 09:41:06.072942   60422 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5003,"bootTime":1763194663,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:41:06.073070   60422 start.go:143] virtualization: kvm guest
	I1115 09:41:06.075011   60422 out.go:179] * [addons-209049] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:41:06.076040   60422 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 09:41:06.076038   60422 notify.go:221] Checking for updates...
	I1115 09:41:06.077039   60422 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:41:06.078187   60422 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 09:41:06.079255   60422 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube
	I1115 09:41:06.080197   60422 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 09:41:06.081325   60422 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:41:06.082536   60422 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:41:06.109449   60422 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 09:41:06.109555   60422 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:41:06.165866   60422 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:46 SystemTime:2025-11-15 09:41:06.156299852 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:41:06.165998   60422 docker.go:319] overlay module found
	I1115 09:41:06.167669   60422 out.go:179] * Using the docker driver based on user configuration
	I1115 09:41:06.168838   60422 start.go:309] selected driver: docker
	I1115 09:41:06.168853   60422 start.go:930] validating driver "docker" against <nil>
	I1115 09:41:06.168865   60422 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:41:06.169771   60422 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:41:06.229504   60422 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:46 SystemTime:2025-11-15 09:41:06.219603445 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:41:06.229683   60422 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 09:41:06.229940   60422 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:41:06.231671   60422 out.go:179] * Using Docker driver with root privileges
	I1115 09:41:06.232772   60422 cni.go:84] Creating CNI manager for ""
	I1115 09:41:06.232851   60422 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:41:06.232879   60422 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 09:41:06.233002   60422 start.go:353] cluster config:
	{Name:addons-209049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-209049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1115 09:41:06.234237   60422 out.go:179] * Starting "addons-209049" primary control-plane node in "addons-209049" cluster
	I1115 09:41:06.235265   60422 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 09:41:06.236297   60422 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 09:41:06.237250   60422 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:41:06.237280   60422 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 09:41:06.237294   60422 cache.go:65] Caching tarball of preloaded images
	I1115 09:41:06.237328   60422 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 09:41:06.237402   60422 preload.go:238] Found /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 09:41:06.237417   60422 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 09:41:06.237758   60422 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/config.json ...
	I1115 09:41:06.237794   60422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/config.json: {Name:mkdbd1a6c4c4edb33badfd696396c451aa16190d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:41:06.254555   60422 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1115 09:41:06.254723   60422 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1115 09:41:06.254747   60422 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory, skipping pull
	I1115 09:41:06.254756   60422 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in cache, skipping pull
	I1115 09:41:06.254770   60422 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 as a tarball
	I1115 09:41:06.254798   60422 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from local cache
	I1115 09:41:21.221530   60422 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from cached tarball
	I1115 09:41:21.221569   60422 cache.go:243] Successfully downloaded all kic artifacts
	I1115 09:41:21.221634   60422 start.go:360] acquireMachinesLock for addons-209049: {Name:mk6ef50958e20619f4fabbc8361c602d26a1aa95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:41:21.221744   60422 start.go:364] duration metric: took 88.34µs to acquireMachinesLock for "addons-209049"
	I1115 09:41:21.221783   60422 start.go:93] Provisioning new machine with config: &{Name:addons-209049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-209049 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:41:21.221852   60422 start.go:125] createHost starting for "" (driver="docker")
	I1115 09:41:21.224309   60422 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1115 09:41:21.224630   60422 start.go:159] libmachine.API.Create for "addons-209049" (driver="docker")
	I1115 09:41:21.224674   60422 client.go:173] LocalClient.Create starting
	I1115 09:41:21.224836   60422 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem
	I1115 09:41:21.546695   60422 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem
	I1115 09:41:21.774293   60422 cli_runner.go:164] Run: docker network inspect addons-209049 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 09:41:21.792453   60422 cli_runner.go:211] docker network inspect addons-209049 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 09:41:21.792538   60422 network_create.go:284] running [docker network inspect addons-209049] to gather additional debugging logs...
	I1115 09:41:21.792561   60422 cli_runner.go:164] Run: docker network inspect addons-209049
	W1115 09:41:21.808812   60422 cli_runner.go:211] docker network inspect addons-209049 returned with exit code 1
	I1115 09:41:21.808844   60422 network_create.go:287] error running [docker network inspect addons-209049]: docker network inspect addons-209049: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-209049 not found
	I1115 09:41:21.808874   60422 network_create.go:289] output of [docker network inspect addons-209049]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-209049 not found
	
	** /stderr **
	I1115 09:41:21.808998   60422 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 09:41:21.826008   60422 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015c2a60}
	I1115 09:41:21.826058   60422 network_create.go:124] attempt to create docker network addons-209049 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1115 09:41:21.826111   60422 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-209049 addons-209049
	I1115 09:41:21.871741   60422 network_create.go:108] docker network addons-209049 192.168.49.0/24 created
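	
	For reference, the subnet chosen above can be re-checked against the live network. A minimal sketch, assuming the addons-209049 network still exists on the same Docker host:
	
	  docker network inspect addons-209049 \
	    --format '{{(index .IPAM.Config 0).Subnet}} gw {{(index .IPAM.Config 0).Gateway}}'
	  # expected: 192.168.49.0/24 gw 192.168.49.1 (matching the network.go:206 line above)
	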
	I1115 09:41:21.871798   60422 kic.go:121] calculated static IP "192.168.49.2" for the "addons-209049" container
	I1115 09:41:21.871913   60422 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 09:41:21.887163   60422 cli_runner.go:164] Run: docker volume create addons-209049 --label name.minikube.sigs.k8s.io=addons-209049 --label created_by.minikube.sigs.k8s.io=true
	I1115 09:41:21.904141   60422 oci.go:103] Successfully created a docker volume addons-209049
	I1115 09:41:21.904228   60422 cli_runner.go:164] Run: docker run --rm --name addons-209049-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-209049 --entrypoint /usr/bin/test -v addons-209049:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 09:41:26.755019   60422 cli_runner.go:217] Completed: docker run --rm --name addons-209049-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-209049 --entrypoint /usr/bin/test -v addons-209049:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib: (4.850742883s)
	I1115 09:41:26.755056   60422 oci.go:107] Successfully prepared a docker volume addons-209049
	I1115 09:41:26.755097   60422 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:41:26.755113   60422 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 09:41:26.755187   60422 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-209049:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1115 09:41:30.970456   60422 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-209049:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.215224582s)
	I1115 09:41:30.970492   60422 kic.go:203] duration metric: took 4.215377053s to extract preloaded images to volume ...
	W1115 09:41:30.970624   60422 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 09:41:30.970732   60422 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 09:41:31.027186   60422 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-209049 --name addons-209049 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-209049 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-209049 --network addons-209049 --ip 192.168.49.2 --volume addons-209049:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 09:41:31.339847   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Running}}
	I1115 09:41:31.358737   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:31.376974   60422 cli_runner.go:164] Run: docker exec addons-209049 stat /var/lib/dpkg/alternatives/iptables
	I1115 09:41:31.422252   60422 oci.go:144] the created container "addons-209049" has a running status.
	I1115 09:41:31.422292   60422 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa...
	I1115 09:41:31.512400   60422 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 09:41:31.538714   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:31.557614   60422 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 09:41:31.557638   60422 kic_runner.go:114] Args: [docker exec --privileged addons-209049 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 09:41:31.600867   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:31.619399   60422 machine.go:94] provisionDockerMachine start ...
	I1115 09:41:31.619486   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:31.642785   60422 main.go:143] libmachine: Using SSH client type: native
	I1115 09:41:31.643154   60422 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1115 09:41:31.643176   60422 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 09:41:31.644474   60422 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48494->127.0.0.1:32768: read: connection reset by peer
	I1115 09:41:34.770904   60422 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-209049
	
	I1115 09:41:34.770938   60422 ubuntu.go:182] provisioning hostname "addons-209049"
	I1115 09:41:34.771017   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:34.789313   60422 main.go:143] libmachine: Using SSH client type: native
	I1115 09:41:34.789535   60422 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1115 09:41:34.789548   60422 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-209049 && echo "addons-209049" | sudo tee /etc/hostname
	I1115 09:41:34.924771   60422 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-209049
	
	I1115 09:41:34.924859   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:34.942941   60422 main.go:143] libmachine: Using SSH client type: native
	I1115 09:41:34.943193   60422 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1115 09:41:34.943212   60422 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-209049' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-209049/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-209049' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 09:41:35.069588   60422 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 09:41:35.069615   60422 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-55448/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-55448/.minikube}
	I1115 09:41:35.069667   60422 ubuntu.go:190] setting up certificates
	I1115 09:41:35.069685   60422 provision.go:84] configureAuth start
	I1115 09:41:35.069743   60422 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-209049
	I1115 09:41:35.086965   60422 provision.go:143] copyHostCerts
	I1115 09:41:35.087042   60422 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem (1082 bytes)
	I1115 09:41:35.087182   60422 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem (1123 bytes)
	I1115 09:41:35.087244   60422 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem (1679 bytes)
	I1115 09:41:35.087292   60422 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem org=jenkins.addons-209049 san=[127.0.0.1 192.168.49.2 addons-209049 localhost minikube]
	I1115 09:41:35.131093   60422 provision.go:177] copyRemoteCerts
	I1115 09:41:35.131159   60422 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 09:41:35.131200   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:35.148327   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:35.242277   60422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 09:41:35.261320   60422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 09:41:35.278296   60422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 09:41:35.294483   60422 provision.go:87] duration metric: took 224.780726ms to configureAuth
	I1115 09:41:35.294509   60422 ubuntu.go:206] setting minikube options for container-runtime
	I1115 09:41:35.294707   60422 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:41:35.294829   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:35.311911   60422 main.go:143] libmachine: Using SSH client type: native
	I1115 09:41:35.312155   60422 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1115 09:41:35.312173   60422 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 09:41:35.545004   60422 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 09:41:35.545030   60422 machine.go:97] duration metric: took 3.925607619s to provisionDockerMachine
	I1115 09:41:35.545041   60422 client.go:176] duration metric: took 14.320358011s to LocalClient.Create
	I1115 09:41:35.545060   60422 start.go:167] duration metric: took 14.32043594s to libmachine.API.Create "addons-209049"
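	
	The CRIO_MINIKUBE_OPTIONS drop-in written over SSH a few lines above can be read back from the node once the profile is up; a hypothetical spot-check, using the same binary and profile name as the commands in the table above:
	
	  out/minikube-linux-amd64 -p addons-209049 ssh cat /etc/sysconfig/crio.minikube
	  # expected to echo: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	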
	I1115 09:41:35.545069   60422 start.go:293] postStartSetup for "addons-209049" (driver="docker")
	I1115 09:41:35.545077   60422 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 09:41:35.545128   60422 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 09:41:35.545164   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:35.563905   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:35.659052   60422 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 09:41:35.662540   60422 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 09:41:35.662572   60422 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 09:41:35.662585   60422 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/addons for local assets ...
	I1115 09:41:35.662651   60422 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/files for local assets ...
	I1115 09:41:35.662686   60422 start.go:296] duration metric: took 117.610467ms for postStartSetup
	I1115 09:41:35.662979   60422 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-209049
	I1115 09:41:35.680394   60422 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/config.json ...
	I1115 09:41:35.680665   60422 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:41:35.680715   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:35.697414   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:35.787054   60422 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 09:41:35.791690   60422 start.go:128] duration metric: took 14.569824395s to createHost
	I1115 09:41:35.791719   60422 start.go:83] releasing machines lock for "addons-209049", held for 14.569959309s
	I1115 09:41:35.791803   60422 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-209049
	I1115 09:41:35.809103   60422 ssh_runner.go:195] Run: cat /version.json
	I1115 09:41:35.809168   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:35.809224   60422 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 09:41:35.809296   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:35.827715   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:35.828036   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:35.973395   60422 ssh_runner.go:195] Run: systemctl --version
	I1115 09:41:35.979728   60422 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 09:41:36.014105   60422 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 09:41:36.018631   60422 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 09:41:36.018683   60422 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 09:41:36.043631   60422 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1115 09:41:36.043656   60422 start.go:496] detecting cgroup driver to use...
	I1115 09:41:36.043696   60422 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 09:41:36.043766   60422 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 09:41:36.059521   60422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 09:41:36.071405   60422 docker.go:218] disabling cri-docker service (if available) ...
	I1115 09:41:36.071480   60422 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 09:41:36.087539   60422 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 09:41:36.104320   60422 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 09:41:36.183377   60422 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 09:41:36.271425   60422 docker.go:234] disabling docker service ...
	I1115 09:41:36.271494   60422 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 09:41:36.290157   60422 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 09:41:36.302345   60422 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 09:41:36.382297   60422 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 09:41:36.462445   60422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 09:41:36.474700   60422 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 09:41:36.488142   60422 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 09:41:36.488201   60422 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:41:36.497835   60422 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 09:41:36.497901   60422 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:41:36.506231   60422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:41:36.514407   60422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:41:36.522851   60422 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 09:41:36.530580   60422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:41:36.538738   60422 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:41:36.551707   60422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
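	
	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys (an approximate reconstruction for readability; the surrounding TOML sections in the kicbase image may differ):
	
	  pause_image = "registry.k8s.io/pause:3.10.1"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]
	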
	I1115 09:41:36.560099   60422 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 09:41:36.567189   60422 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1115 09:41:36.567236   60422 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1115 09:41:36.579758   60422 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 09:41:36.587718   60422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:41:36.665800   60422 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 09:41:36.770176   60422 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 09:41:36.770267   60422 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 09:41:36.774241   60422 start.go:564] Will wait 60s for crictl version
	I1115 09:41:36.774306   60422 ssh_runner.go:195] Run: which crictl
	I1115 09:41:36.777885   60422 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 09:41:36.801691   60422 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 09:41:36.801788   60422 ssh_runner.go:195] Run: crio --version
	I1115 09:41:36.828550   60422 ssh_runner.go:195] Run: crio --version
	I1115 09:41:36.857364   60422 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 09:41:36.858532   60422 cli_runner.go:164] Run: docker network inspect addons-209049 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 09:41:36.876200   60422 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 09:41:36.880333   60422 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:41:36.890204   60422 kubeadm.go:884] updating cluster {Name:addons-209049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-209049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 09:41:36.890339   60422 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:41:36.890398   60422 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:41:36.920882   60422 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 09:41:36.920904   60422 crio.go:433] Images already preloaded, skipping extraction
	I1115 09:41:36.920965   60422 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:41:36.946346   60422 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 09:41:36.946371   60422 cache_images.go:86] Images are preloaded, skipping loading
	I1115 09:41:36.946378   60422 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1115 09:41:36.946499   60422 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-209049 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-209049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
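	
	The kubelet flags above are what end up in the 10-kubeadm.conf drop-in scp'd a few lines below; if needed, the rendered unit can be compared against this log on the live node (a hypothetical invocation, in the same CLI style as the commands in the table above):
	
	  out/minikube-linux-amd64 -p addons-209049 ssh cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	  # should contain the ExecStart line with --hostname-override=addons-209049 and --node-ip=192.168.49.2
	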
	I1115 09:41:36.946573   60422 ssh_runner.go:195] Run: crio config
	I1115 09:41:36.991230   60422 cni.go:84] Creating CNI manager for ""
	I1115 09:41:36.991254   60422 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:41:36.991273   60422 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 09:41:36.991299   60422 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-209049 NodeName:addons-209049 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 09:41:36.991437   60422 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-209049"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
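	
	Since the full generated kubeadm config is reproduced above and is later written to /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked offline with the kubeadm binary the log already locates under /var/lib/minikube/binaries/v1.34.1; a minimal sketch, assuming kubeadm's "config validate" subcommand (available in recent releases) is run on the node:
	
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	  # exits 0 when the InitConfiguration/ClusterConfiguration documents are well-formed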
	
	I1115 09:41:36.991510   60422 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 09:41:36.999499   60422 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 09:41:36.999562   60422 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 09:41:37.007024   60422 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 09:41:37.019032   60422 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 09:41:37.033364   60422 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1115 09:41:37.045332   60422 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1115 09:41:37.048872   60422 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:41:37.058308   60422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:41:37.135472   60422 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:41:37.162540   60422 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049 for IP: 192.168.49.2
	I1115 09:41:37.162563   60422 certs.go:195] generating shared ca certs ...
	I1115 09:41:37.162584   60422 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:41:37.162747   60422 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 09:41:37.672914   60422 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt ...
	I1115 09:41:37.672959   60422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt: {Name:mk79b8053ded3a30f80aec48e33e9cc288cf87ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:41:37.673176   60422 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key ...
	I1115 09:41:37.673193   60422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key: {Name:mk23e892d9a40c4ce81a499215fe2d80e5697a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:41:37.673307   60422 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 09:41:37.781907   60422 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt ...
	I1115 09:41:37.781940   60422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt: {Name:mk3468db0c2dcd84d5f98fba6ac770ec71e6a9a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:41:37.782154   60422 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key ...
	I1115 09:41:37.782174   60422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key: {Name:mk3f23db79dc681ebc09dd50cd76cdc2cd124f5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:41:37.782285   60422 certs.go:257] generating profile certs ...
	I1115 09:41:37.782367   60422 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.key
	I1115 09:41:37.782390   60422 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt with IP's: []
	I1115 09:41:37.798013   60422 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt ...
	I1115 09:41:37.798033   60422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt: {Name:mk9f8a06e150e4ab615cfeb860a4df3cd046bcd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:41:37.798186   60422 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.key ...
	I1115 09:41:37.798200   60422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.key: {Name:mke2a2122d4c3934eb296ff2f0d02b9e826c7efe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:41:37.798296   60422 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/apiserver.key.294b6e2f
	I1115 09:41:37.798316   60422 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/apiserver.crt.294b6e2f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1115 09:41:38.338641   60422 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/apiserver.crt.294b6e2f ...
	I1115 09:41:38.338680   60422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/apiserver.crt.294b6e2f: {Name:mk7a081e4c61e7aa3e99ef74af10d2ab8744cf45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:41:38.338908   60422 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/apiserver.key.294b6e2f ...
	I1115 09:41:38.338929   60422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/apiserver.key.294b6e2f: {Name:mk337f05ec6a0f43baacca048181e76f39edf618 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:41:38.339074   60422 certs.go:382] copying /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/apiserver.crt.294b6e2f -> /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/apiserver.crt
	I1115 09:41:38.339182   60422 certs.go:386] copying /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/apiserver.key.294b6e2f -> /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/apiserver.key
	I1115 09:41:38.339243   60422 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/proxy-client.key
	I1115 09:41:38.339265   60422 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/proxy-client.crt with IP's: []
	I1115 09:41:38.520170   60422 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/proxy-client.crt ...
	I1115 09:41:38.520204   60422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/proxy-client.crt: {Name:mkde5c047e033a4775bfd74c851129f97f744b64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:41:38.520408   60422 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/proxy-client.key ...
	I1115 09:41:38.520425   60422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/proxy-client.key: {Name:mkccf08a52ea888ff6354e4fc12e5615be3a0451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:41:38.520633   60422 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 09:41:38.520669   60422 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 09:41:38.520704   60422 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 09:41:38.520731   60422 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 09:41:38.521370   60422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 09:41:38.539153   60422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 09:41:38.555926   60422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 09:41:38.572674   60422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 09:41:38.589496   60422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1115 09:41:38.606693   60422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 09:41:38.623703   60422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 09:41:38.640357   60422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 09:41:38.657318   60422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 09:41:38.676071   60422 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 09:41:38.687871   60422 ssh_runner.go:195] Run: openssl version
	I1115 09:41:38.693597   60422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 09:41:38.703432   60422 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:41:38.706982   60422 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:41:38.707037   60422 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:41:38.742177   60422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 09:41:38.751621   60422 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 09:41:38.755384   60422 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 09:41:38.755459   60422 kubeadm.go:401] StartCluster: {Name:addons-209049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-209049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:41:38.755551   60422 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:41:38.755608   60422 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:41:38.781921   60422 cri.go:89] found id: ""
	I1115 09:41:38.782001   60422 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 09:41:38.790481   60422 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 09:41:38.798156   60422 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 09:41:38.798214   60422 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 09:41:38.805518   60422 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 09:41:38.805538   60422 kubeadm.go:158] found existing configuration files:
	
	I1115 09:41:38.805587   60422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 09:41:38.812827   60422 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 09:41:38.812867   60422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 09:41:38.819744   60422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 09:41:38.827139   60422 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 09:41:38.827186   60422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 09:41:38.834190   60422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 09:41:38.841531   60422 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 09:41:38.841575   60422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 09:41:38.848598   60422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 09:41:38.855883   60422 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 09:41:38.855929   60422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 09:41:38.863051   60422 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 09:41:38.898909   60422 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 09:41:38.899027   60422 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 09:41:38.918107   60422 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 09:41:38.918223   60422 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1115 09:41:38.918299   60422 kubeadm.go:319] OS: Linux
	I1115 09:41:38.918361   60422 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 09:41:38.918420   60422 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 09:41:38.918483   60422 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 09:41:38.918546   60422 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 09:41:38.918608   60422 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 09:41:38.918673   60422 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 09:41:38.918744   60422 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 09:41:38.918820   60422 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 09:41:38.918892   60422 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 09:41:38.973717   60422 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 09:41:38.973839   60422 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 09:41:38.973982   60422 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 09:41:38.981938   60422 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 09:41:38.983969   60422 out.go:252]   - Generating certificates and keys ...
	I1115 09:41:38.984052   60422 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 09:41:38.984138   60422 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 09:41:39.424827   60422 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 09:41:39.676914   60422 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 09:41:39.737304   60422 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 09:41:40.363199   60422 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 09:41:40.632277   60422 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 09:41:40.632464   60422 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-209049 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1115 09:41:40.894113   60422 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 09:41:40.894329   60422 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-209049 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1115 09:41:41.068199   60422 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 09:41:41.143395   60422 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 09:41:41.351299   60422 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 09:41:41.351384   60422 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 09:41:41.495618   60422 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 09:41:41.794787   60422 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 09:41:41.877840   60422 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 09:41:42.061273   60422 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 09:41:42.443831   60422 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 09:41:42.444327   60422 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 09:41:42.448008   60422 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 09:41:42.449533   60422 out.go:252]   - Booting up control plane ...
	I1115 09:41:42.449677   60422 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 09:41:42.449808   60422 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 09:41:42.450391   60422 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 09:41:42.478151   60422 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 09:41:42.478294   60422 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 09:41:42.485496   60422 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 09:41:42.485773   60422 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 09:41:42.485843   60422 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 09:41:42.579049   60422 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 09:41:42.579215   60422 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 09:41:43.580864   60422 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00193317s
	I1115 09:41:43.584729   60422 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 09:41:43.584857   60422 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1115 09:41:43.585006   60422 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 09:41:43.585148   60422 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 09:41:46.699762   60422 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.115076181s
	I1115 09:41:46.993159   60422 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.408389221s
	I1115 09:41:48.086752   60422 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501992652s
	I1115 09:41:48.097615   60422 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 09:41:48.107369   60422 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 09:41:48.117662   60422 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 09:41:48.117995   60422 kubeadm.go:319] [mark-control-plane] Marking the node addons-209049 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 09:41:48.125947   60422 kubeadm.go:319] [bootstrap-token] Using token: 58g2md.y7ij3qnn2hkj3vkt
	I1115 09:41:48.127285   60422 out.go:252]   - Configuring RBAC rules ...
	I1115 09:41:48.127467   60422 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 09:41:48.131562   60422 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 09:41:48.136141   60422 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 09:41:48.138531   60422 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 09:41:48.140770   60422 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 09:41:48.143900   60422 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 09:41:48.492025   60422 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 09:41:48.905759   60422 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 09:41:49.491940   60422 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 09:41:49.492896   60422 kubeadm.go:319] 
	I1115 09:41:49.493024   60422 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 09:41:49.493050   60422 kubeadm.go:319] 
	I1115 09:41:49.493120   60422 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 09:41:49.493144   60422 kubeadm.go:319] 
	I1115 09:41:49.493194   60422 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 09:41:49.493287   60422 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 09:41:49.493366   60422 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 09:41:49.493377   60422 kubeadm.go:319] 
	I1115 09:41:49.493472   60422 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 09:41:49.493481   60422 kubeadm.go:319] 
	I1115 09:41:49.493538   60422 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 09:41:49.493563   60422 kubeadm.go:319] 
	I1115 09:41:49.493641   60422 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 09:41:49.493733   60422 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 09:41:49.493834   60422 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 09:41:49.493843   60422 kubeadm.go:319] 
	I1115 09:41:49.493938   60422 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 09:41:49.494064   60422 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 09:41:49.494072   60422 kubeadm.go:319] 
	I1115 09:41:49.494199   60422 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 58g2md.y7ij3qnn2hkj3vkt \
	I1115 09:41:49.494292   60422 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c958101d3a67868123c5e439a6c1ea6e30a99a6538b76cbe461e2c919cf1aec \
	I1115 09:41:49.494313   60422 kubeadm.go:319] 	--control-plane 
	I1115 09:41:49.494317   60422 kubeadm.go:319] 
	I1115 09:41:49.494387   60422 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 09:41:49.494393   60422 kubeadm.go:319] 
	I1115 09:41:49.494465   60422 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 58g2md.y7ij3qnn2hkj3vkt \
	I1115 09:41:49.494590   60422 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c958101d3a67868123c5e439a6c1ea6e30a99a6538b76cbe461e2c919cf1aec 
	I1115 09:41:49.496050   60422 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1115 09:41:49.496295   60422 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1115 09:41:49.496396   60422 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 09:41:49.496418   60422 cni.go:84] Creating CNI manager for ""
	I1115 09:41:49.496433   60422 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:41:49.497889   60422 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1115 09:41:49.498919   60422 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 09:41:49.503298   60422 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 09:41:49.503314   60422 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 09:41:49.516421   60422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 09:41:49.715070   60422 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 09:41:49.715187   60422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:41:49.715211   60422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-209049 minikube.k8s.io/updated_at=2025_11_15T09_41_49_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510 minikube.k8s.io/name=addons-209049 minikube.k8s.io/primary=true
	I1115 09:41:49.796317   60422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:41:49.796394   60422 ops.go:34] apiserver oom_adj: -16
	I1115 09:41:50.296480   60422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:41:50.797319   60422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:41:51.296431   60422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:41:51.797394   60422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:41:52.296412   60422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:41:52.796930   60422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:41:53.296718   60422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:41:53.797164   60422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:41:54.296463   60422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:41:54.796386   60422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:41:55.297152   60422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:41:55.389208   60422 kubeadm.go:1114] duration metric: took 5.674113376s to wait for elevateKubeSystemPrivileges
	I1115 09:41:55.389268   60422 kubeadm.go:403] duration metric: took 16.633804184s to StartCluster
	I1115 09:41:55.389294   60422 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:41:55.389438   60422 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 09:41:55.390095   60422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:41:55.390335   60422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 09:41:55.390384   60422 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:41:55.390589   60422 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:41:55.390428   60422 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1115 09:41:55.390674   60422 addons.go:70] Setting yakd=true in profile "addons-209049"
	I1115 09:41:55.390691   60422 addons.go:239] Setting addon yakd=true in "addons-209049"
	I1115 09:41:55.390713   60422 addons.go:70] Setting inspektor-gadget=true in profile "addons-209049"
	I1115 09:41:55.390733   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.390738   60422 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-209049"
	I1115 09:41:55.390747   60422 addons.go:70] Setting gcp-auth=true in profile "addons-209049"
	I1115 09:41:55.390755   60422 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-209049"
	I1115 09:41:55.390770   60422 mustload.go:66] Loading cluster: addons-209049
	I1115 09:41:55.390787   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.390808   60422 addons.go:70] Setting registry-creds=true in profile "addons-209049"
	I1115 09:41:55.390803   60422 addons.go:70] Setting ingress=true in profile "addons-209049"
	I1115 09:41:55.390808   60422 addons.go:70] Setting ingress-dns=true in profile "addons-209049"
	I1115 09:41:55.390850   60422 addons.go:239] Setting addon registry-creds=true in "addons-209049"
	I1115 09:41:55.390853   60422 addons.go:239] Setting addon ingress=true in "addons-209049"
	I1115 09:41:55.390872   60422 addons.go:239] Setting addon ingress-dns=true in "addons-209049"
	I1115 09:41:55.390877   60422 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-209049"
	I1115 09:41:55.390896   60422 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-209049"
	I1115 09:41:55.390902   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.390913   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.390919   60422 addons.go:70] Setting cloud-spanner=true in profile "addons-209049"
	I1115 09:41:55.391616   60422 addons.go:70] Setting volumesnapshots=true in profile "addons-209049"
	I1115 09:41:55.391637   60422 addons.go:70] Setting metrics-server=true in profile "addons-209049"
	I1115 09:41:55.391644   60422 addons.go:239] Setting addon volumesnapshots=true in "addons-209049"
	I1115 09:41:55.391657   60422 addons.go:239] Setting addon metrics-server=true in "addons-209049"
	I1115 09:41:55.391670   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.391689   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.392080   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.392105   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.392216   60422 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-209049"
	I1115 09:41:55.392292   60422 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-209049"
	I1115 09:41:55.392295   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.392315   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.392326   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.392912   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.394064   60422 addons.go:70] Setting volcano=true in profile "addons-209049"
	I1115 09:41:55.394137   60422 addons.go:239] Setting addon volcano=true in "addons-209049"
	I1115 09:41:55.394190   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.394637   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.394785   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.390923   60422 addons.go:70] Setting registry=true in profile "addons-209049"
	I1115 09:41:55.394985   60422 addons.go:239] Setting addon registry=true in "addons-209049"
	I1115 09:41:55.395019   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.395080   60422 out.go:179] * Verifying Kubernetes components...
	I1115 09:41:55.391622   60422 addons.go:239] Setting addon cloud-spanner=true in "addons-209049"
	I1115 09:41:55.390731   60422 addons.go:70] Setting default-storageclass=true in profile "addons-209049"
	I1115 09:41:55.391328   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.391418   60422 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:41:55.395381   60422 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-209049"
	I1115 09:41:55.395466   60422 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-209049"
	I1115 09:41:55.395916   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.396530   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.396893   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.397360   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.397432   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.390741   60422 addons.go:239] Setting addon inspektor-gadget=true in "addons-209049"
	I1115 09:41:55.397663   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.397681   60422 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-209049"
	I1115 09:41:55.398038   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.398479   60422 addons.go:70] Setting storage-provisioner=true in profile "addons-209049"
	I1115 09:41:55.398498   60422 addons.go:239] Setting addon storage-provisioner=true in "addons-209049"
	I1115 09:41:55.398559   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.397663   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.401018   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.401582   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.401164   60422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:41:55.411817   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.415184   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.419158   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.449337   60422 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1115 09:41:55.451004   60422 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1115 09:41:55.451031   60422 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1115 09:41:55.451109   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	W1115 09:41:55.453313   60422 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1115 09:41:55.453826   60422 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-209049"
	I1115 09:41:55.454315   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.455670   60422 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1115 09:41:55.457657   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.457786   60422 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1115 09:41:55.458054   60422 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1115 09:41:55.458697   60422 out.go:179]   - Using image docker.io/registry:3.0.0
	I1115 09:41:55.459005   60422 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1115 09:41:55.459022   60422 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1115 09:41:55.459091   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:55.460229   60422 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1115 09:41:55.460248   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1115 09:41:55.460298   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:55.460664   60422 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1115 09:41:55.460678   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1115 09:41:55.460724   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:55.468434   60422 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1115 09:41:55.472199   60422 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1115 09:41:55.472223   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1115 09:41:55.472284   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:55.472640   60422 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1115 09:41:55.472946   60422 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1115 09:41:55.475740   60422 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1115 09:41:55.480079   60422 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1115 09:41:55.480102   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1115 09:41:55.480164   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:55.480337   60422 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1115 09:41:55.481393   60422 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1115 09:41:55.482001   60422 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1115 09:41:55.482365   60422 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1115 09:41:55.483329   60422 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1115 09:41:55.483681   60422 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1115 09:41:55.483701   60422 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1115 09:41:55.483766   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:55.484179   60422 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1115 09:41:55.484637   60422 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1115 09:41:55.484684   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1115 09:41:55.484771   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:55.489997   60422 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1115 09:41:55.494874   60422 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1115 09:41:55.496043   60422 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1115 09:41:55.497014   60422 addons.go:239] Setting addon default-storageclass=true in "addons-209049"
	I1115 09:41:55.497062   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.497634   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.498806   60422 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1115 09:41:55.500136   60422 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1115 09:41:55.500156   60422 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1115 09:41:55.500220   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:55.504654   60422 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1115 09:41:55.510005   60422 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1115 09:41:55.510589   60422 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1115 09:41:55.510610   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1115 09:41:55.510677   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:55.511054   60422 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 09:41:55.511158   60422 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1115 09:41:55.512808   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1115 09:41:55.512993   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:55.512015   60422 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1115 09:41:55.512259   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.514938   60422 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1115 09:41:55.515025   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1115 09:41:55.515103   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:55.517667   60422 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 09:41:55.517687   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 09:41:55.517739   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:55.525379   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:55.527791   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:55.539023   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:55.541746   60422 out.go:179]   - Using image docker.io/busybox:stable
	I1115 09:41:55.543662   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:55.545082   60422 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1115 09:41:55.547169   60422 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 09:41:55.547247   60422 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 09:41:55.547340   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:55.548077   60422 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1115 09:41:55.548099   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1115 09:41:55.548154   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:55.558621   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:55.561849   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:55.564172   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:55.564256   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:55.564249   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:55.564615   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:55.566831   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:55.569516   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:55.571347   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:55.578150   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:55.579244   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:55.585289   60422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 09:41:55.895491   60422 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:41:56.094010   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1115 09:41:56.097088   60422 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1115 09:41:56.097115   60422 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1115 09:41:56.178668   60422 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1115 09:41:56.178717   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1115 09:41:56.179962   60422 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1115 09:41:56.179988   60422 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1115 09:41:56.180608   60422 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1115 09:41:56.180628   60422 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1115 09:41:56.288694   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1115 09:41:56.289065   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 09:41:56.289584   60422 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1115 09:41:56.289601   60422 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1115 09:41:56.294994   60422 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1115 09:41:56.295014   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1115 09:41:56.375368   60422 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1115 09:41:56.375396   60422 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1115 09:41:56.375863   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1115 09:41:56.378195   60422 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1115 09:41:56.378220   60422 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1115 09:41:56.379691   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1115 09:41:56.380508   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1115 09:41:56.388214   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1115 09:41:56.388378   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1115 09:41:56.388950   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 09:41:56.395686   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1115 09:41:56.478334   60422 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1115 09:41:56.478398   60422 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1115 09:41:56.486360   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1115 09:41:56.489039   60422 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1115 09:41:56.489066   60422 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1115 09:41:56.576754   60422 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1115 09:41:56.576787   60422 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1115 09:41:56.579230   60422 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1115 09:41:56.579299   60422 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1115 09:41:56.683443   60422 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1115 09:41:56.683472   60422 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1115 09:41:56.774049   60422 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1115 09:41:56.774207   60422 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1115 09:41:56.782835   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1115 09:41:56.790923   60422 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1115 09:41:56.790944   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1115 09:41:56.893760   60422 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1115 09:41:56.893801   60422 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1115 09:41:56.992642   60422 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1115 09:41:56.992708   60422 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1115 09:41:56.994517   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1115 09:41:57.184243   60422 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1115 09:41:57.184274   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1115 09:41:57.280597   60422 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1115 09:41:57.280626   60422 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1115 09:41:57.397347   60422 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.811959724s)
	I1115 09:41:57.397386   60422 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1115 09:41:57.398639   60422 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.503112765s)
	I1115 09:41:57.399304   60422 node_ready.go:35] waiting up to 6m0s for node "addons-209049" to be "Ready" ...
	I1115 09:41:57.476166   60422 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1115 09:41:57.476199   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1115 09:41:57.479794   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1115 09:41:57.789927   60422 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1115 09:41:57.789972   60422 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1115 09:41:57.983339   60422 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-209049" context rescaled to 1 replicas
	I1115 09:41:58.174206   60422 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1115 09:41:58.174243   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1115 09:41:58.381746   60422 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1115 09:41:58.381776   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1115 09:41:58.486451   60422 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1115 09:41:58.486489   60422 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1115 09:41:58.496520   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.207419539s)
	I1115 09:41:58.496897   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.402848077s)
	I1115 09:41:58.787727   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1115 09:41:59.477190   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:00.305856   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.017116874s)
	I1115 09:42:00.978803   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.602896906s)
	I1115 09:42:00.978845   60422 addons.go:480] Verifying addon ingress=true in "addons-209049"
	I1115 09:42:00.978898   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.598355525s)
	I1115 09:42:00.978930   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.59921401s)
	I1115 09:42:00.979036   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.590801259s)
	I1115 09:42:00.979116   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.590712497s)
	I1115 09:42:00.979171   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.590190642s)
	I1115 09:42:00.979223   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.583510046s)
	I1115 09:42:00.979279   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.492887332s)
	I1115 09:42:00.979297   60422 addons.go:480] Verifying addon registry=true in "addons-209049"
	I1115 09:42:00.979366   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.1964442s)
	I1115 09:42:00.979393   60422 addons.go:480] Verifying addon metrics-server=true in "addons-209049"
	I1115 09:42:00.979441   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.984894825s)
	I1115 09:42:00.981531   60422 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-209049 service yakd-dashboard -n yakd-dashboard
	
	I1115 09:42:00.981553   60422 out.go:179] * Verifying ingress addon...
	I1115 09:42:00.981556   60422 out.go:179] * Verifying registry addon...
	I1115 09:42:00.984240   60422 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1115 09:42:00.984257   60422 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1115 09:42:00.987026   60422 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1115 09:42:00.987045   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:00.987149   60422 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1115 09:42:00.987175   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:01.487816   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:01.492609   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:01.786212   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.306367581s)
	W1115 09:42:01.786279   60422 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1115 09:42:01.786314   60422 retry.go:31] will retry after 168.267252ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1115 09:42:01.786427   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.998579651s)
	I1115 09:42:01.786473   60422 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-209049"
	I1115 09:42:01.787853   60422 out.go:179] * Verifying csi-hostpath-driver addon...
	I1115 09:42:01.789782   60422 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1115 09:42:01.792883   60422 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1115 09:42:01.792981   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:42:01.903639   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:01.955044   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1115 09:42:01.987560   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:01.987614   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:02.293814   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:02.487734   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:02.487817   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:02.793281   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:02.988176   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:02.988235   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:03.121182   60422 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1115 09:42:03.121262   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:42:03.138990   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:42:03.238301   60422 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1115 09:42:03.250724   60422 addons.go:239] Setting addon gcp-auth=true in "addons-209049"
	I1115 09:42:03.250788   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:42:03.251216   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:42:03.268602   60422 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1115 09:42:03.268664   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:42:03.286048   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:42:03.293947   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:03.487742   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:03.487945   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:03.792707   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:03.987264   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:03.987434   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:04.293399   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:42:04.402275   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:04.483112   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.528015789s)
	I1115 09:42:04.483233   60422 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.214599353s)
	I1115 09:42:04.485015   60422 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1115 09:42:04.486250   60422 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1115 09:42:04.487325   60422 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1115 09:42:04.487365   60422 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1115 09:42:04.488288   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:04.488531   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:04.500931   60422 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1115 09:42:04.500971   60422 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1115 09:42:04.513566   60422 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1115 09:42:04.513591   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1115 09:42:04.526603   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1115 09:42:04.793100   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:04.878809   60422 addons.go:480] Verifying addon gcp-auth=true in "addons-209049"
	I1115 09:42:04.880181   60422 out.go:179] * Verifying gcp-auth addon...
	I1115 09:42:04.881911   60422 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1115 09:42:04.893931   60422 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1115 09:42:04.893973   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:04.987779   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:04.987966   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:05.292800   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:05.385436   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:05.487744   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:05.488089   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:05.792464   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:05.885025   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:05.987045   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:05.987257   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:06.293224   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:06.384834   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:42:06.402333   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:06.486990   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:06.487237   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:06.793502   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:06.885167   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:06.987904   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:06.988063   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:07.293388   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:07.385226   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:07.487682   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:07.487937   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:07.792773   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:07.885474   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:07.987244   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:07.987635   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:08.293412   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:08.385280   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:42:08.402997   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:08.488282   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:08.488535   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:08.793325   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:08.885174   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:08.987350   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:08.987527   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:09.293738   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:09.385463   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:09.487029   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:09.487085   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:09.792849   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:09.885623   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:09.987689   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:09.987856   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:10.292719   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:10.385462   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:10.487753   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:10.488002   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:10.792606   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:10.885354   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:42:10.902574   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:10.987212   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:10.987435   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:11.293571   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:11.385338   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:11.487739   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:11.487796   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:11.792425   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:11.885071   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:11.987245   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:11.987311   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:12.293270   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:12.384808   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:12.486998   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:12.487151   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:12.792998   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:12.884470   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:12.987408   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:12.987647   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:13.293884   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:13.385724   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:42:13.402100   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:13.487928   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:13.488025   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:13.792596   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:13.885268   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:13.986902   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:13.987013   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:14.292636   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:14.385265   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:14.487694   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:14.487917   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:14.792730   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:14.885514   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:14.987701   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:14.987863   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:15.292705   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:15.385332   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:42:15.402741   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:15.487585   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:15.487751   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:15.792518   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:15.885070   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:15.987246   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:15.987476   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:16.293555   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:16.385230   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:16.487343   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:16.487413   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:16.793260   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:16.884862   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:16.988852   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:16.988908   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:17.292569   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:17.385039   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:17.487226   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:17.487475   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:17.793580   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:17.885042   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:42:17.902320   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:17.987229   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:17.987341   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:18.293454   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:18.385308   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:18.487314   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:18.487332   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:18.793125   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:18.885347   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:18.987326   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:18.987519   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:19.293441   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:19.385223   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:19.487595   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:19.487676   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:19.793940   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:19.885633   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:19.987618   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:19.987774   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:20.292599   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:20.385438   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:42:20.401732   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:20.487310   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:20.487538   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:20.793492   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:20.885104   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:20.986845   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:20.986977   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:21.293161   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:21.384938   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:21.486781   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:21.487010   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:21.792873   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:21.885441   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:21.987539   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:21.987712   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:22.292496   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:22.385279   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:42:22.403122   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:22.487541   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:22.487694   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:22.793342   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:22.884821   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:22.986878   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:22.987114   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:23.293282   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:23.385082   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:23.487320   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:23.487519   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:23.793334   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:23.885298   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:23.987255   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:23.987393   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:24.293339   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:24.385259   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:24.487433   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:24.487578   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:24.793504   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:24.885456   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:42:24.902746   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:24.987586   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:24.987814   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:25.292437   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:25.385420   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:25.487680   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:25.487833   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:25.792421   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:25.884871   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:25.987294   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:25.987526   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:26.293471   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:26.385109   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:26.487125   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:26.487370   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:26.792911   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:26.885870   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:26.987973   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:26.988084   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:27.292601   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:27.385546   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:42:27.402078   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:27.487662   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:27.487902   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:27.792691   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:27.885647   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:27.987511   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:27.987744   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:28.294172   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:28.384820   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:28.486771   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:28.486945   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:28.792412   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:28.885169   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:28.986896   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:28.987069   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:29.292829   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:29.385477   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:29.487655   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:29.487723   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:29.792766   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:29.885604   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:42:29.902129   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:29.987569   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:29.989486   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:30.292625   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:30.385361   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:30.487467   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:30.487593   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:30.793244   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:30.885106   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:30.987062   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:30.987211   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:31.293257   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:31.385122   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:31.487359   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:31.487621   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:31.793528   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:31.885266   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:42:31.902906   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:31.987708   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:31.987819   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:32.292628   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:32.385399   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:32.487560   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:32.487722   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:32.792541   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:32.885123   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:32.987558   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:32.987702   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:33.293121   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:33.386834   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:33.487357   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:33.487641   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:33.793332   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:33.885314   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:33.987255   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:33.987418   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:34.293063   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:34.384713   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:42:34.402197   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:34.487802   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:34.488046   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:34.792611   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:34.885137   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:34.987270   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:34.987457   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:35.293354   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:35.385153   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:35.487633   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:35.487828   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:35.792827   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:35.885648   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:35.987671   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:35.987796   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:36.294037   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:36.385421   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:36.474535   60422 node_ready.go:49] node "addons-209049" is "Ready"
	I1115 09:42:36.474566   60422 node_ready.go:38] duration metric: took 39.075238511s for node "addons-209049" to be "Ready" ...
	I1115 09:42:36.474588   60422 api_server.go:52] waiting for apiserver process to appear ...
	I1115 09:42:36.474656   60422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:42:36.488337   60422 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1115 09:42:36.488362   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:36.488710   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:36.493611   60422 api_server.go:72] duration metric: took 41.103191335s to wait for apiserver process to appear ...
	I1115 09:42:36.493639   60422 api_server.go:88] waiting for apiserver healthz status ...
	I1115 09:42:36.493657   60422 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 09:42:36.498219   60422 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1115 09:42:36.499097   60422 api_server.go:141] control plane version: v1.34.1
	I1115 09:42:36.499126   60422 api_server.go:131] duration metric: took 5.478642ms to wait for apiserver health ...
	I1115 09:42:36.499163   60422 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 09:42:36.505817   60422 system_pods.go:59] 20 kube-system pods found
	I1115 09:42:36.505849   60422 system_pods.go:61] "amd-gpu-device-plugin-zxglt" [2d4f6e85-2a3f-4fc9-85de-7a92b0cc0241] Pending
	I1115 09:42:36.505857   60422 system_pods.go:61] "coredns-66bc5c9577-4xn7s" [b5451c3e-5e31-46fc-aca6-1c24946da89a] Pending
	I1115 09:42:36.505863   60422 system_pods.go:61] "csi-hostpath-attacher-0" [75eb60d9-894a-4839-a716-6bfb2f15783b] Pending
	I1115 09:42:36.505869   60422 system_pods.go:61] "csi-hostpath-resizer-0" [557b0a30-f93a-4371-bd28-7ce29fe0f42f] Pending
	I1115 09:42:36.505874   60422 system_pods.go:61] "csi-hostpathplugin-n2grt" [1e55d345-4538-4c11-9d5d-27e7a8026753] Pending
	I1115 09:42:36.505879   60422 system_pods.go:61] "etcd-addons-209049" [dbb94784-4512-4f24-8198-509c45540ddc] Running
	I1115 09:42:36.505884   60422 system_pods.go:61] "kindnet-p4lm7" [99c22a89-4507-418b-bf45-7c29147f179f] Running
	I1115 09:42:36.505889   60422 system_pods.go:61] "kube-apiserver-addons-209049" [db7797f5-d9d4-4fd1-bd3e-ca7cf178a6ed] Running
	I1115 09:42:36.505893   60422 system_pods.go:61] "kube-controller-manager-addons-209049" [1a32ce38-b5b0-481d-980a-42f5e37030cb] Running
	I1115 09:42:36.505910   60422 system_pods.go:61] "kube-ingress-dns-minikube" [48982f43-9ec1-4e9d-a65c-a93e59979a96] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:42:36.505920   60422 system_pods.go:61] "kube-proxy-vkr7k" [e066d5c7-375f-467b-af24-507d2a8393bb] Running
	I1115 09:42:36.505928   60422 system_pods.go:61] "kube-scheduler-addons-209049" [e8762fe1-3552-421f-8cd8-0e11d7f817a7] Running
	I1115 09:42:36.505938   60422 system_pods.go:61] "metrics-server-85b7d694d7-sgjrz" [72926a73-00e6-4532-92f6-7db18c63f53f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:42:36.505944   60422 system_pods.go:61] "nvidia-device-plugin-daemonset-qtrg4" [2ece1371-7279-4f25-ad2d-270518aadb18] Pending
	I1115 09:42:36.505969   60422 system_pods.go:61] "registry-6b586f9694-fwbg5" [83b0e192-16e8-40e2-9ff5-a5964957dc8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:42:36.505982   60422 system_pods.go:61] "registry-creds-764b6fb674-d6rh5" [e3dfc52f-3ad1-4877-8af4-d96c6b46d6a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:42:36.505987   60422 system_pods.go:61] "registry-proxy-xzbqg" [bcc22f7e-dc0a-4976-9707-3acbdc5421f3] Pending
	I1115 09:42:36.505993   60422 system_pods.go:61] "snapshot-controller-7d9fbc56b8-blfn9" [074c2f9d-00a5-4f0d-a757-4cd3d19884a2] Pending
	I1115 09:42:36.506001   60422 system_pods.go:61] "snapshot-controller-7d9fbc56b8-mqtdg" [68c6aac6-c7a8-43e8-aeea-1a1bc2a3ffe7] Pending
	I1115 09:42:36.506008   60422 system_pods.go:61] "storage-provisioner" [5f3882a6-a000-4050-86f4-5d30ce6faeff] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:42:36.506016   60422 system_pods.go:74] duration metric: took 6.842415ms to wait for pod list to return data ...
	I1115 09:42:36.506030   60422 default_sa.go:34] waiting for default service account to be created ...
	I1115 09:42:36.507923   60422 default_sa.go:45] found service account: "default"
	I1115 09:42:36.507945   60422 default_sa.go:55] duration metric: took 1.908701ms for default service account to be created ...
	I1115 09:42:36.507975   60422 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 09:42:36.515011   60422 system_pods.go:86] 20 kube-system pods found
	I1115 09:42:36.515046   60422 system_pods.go:89] "amd-gpu-device-plugin-zxglt" [2d4f6e85-2a3f-4fc9-85de-7a92b0cc0241] Pending
	I1115 09:42:36.515055   60422 system_pods.go:89] "coredns-66bc5c9577-4xn7s" [b5451c3e-5e31-46fc-aca6-1c24946da89a] Pending
	I1115 09:42:36.515061   60422 system_pods.go:89] "csi-hostpath-attacher-0" [75eb60d9-894a-4839-a716-6bfb2f15783b] Pending
	I1115 09:42:36.515073   60422 system_pods.go:89] "csi-hostpath-resizer-0" [557b0a30-f93a-4371-bd28-7ce29fe0f42f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 09:42:36.515079   60422 system_pods.go:89] "csi-hostpathplugin-n2grt" [1e55d345-4538-4c11-9d5d-27e7a8026753] Pending
	I1115 09:42:36.515087   60422 system_pods.go:89] "etcd-addons-209049" [dbb94784-4512-4f24-8198-509c45540ddc] Running
	I1115 09:42:36.515093   60422 system_pods.go:89] "kindnet-p4lm7" [99c22a89-4507-418b-bf45-7c29147f179f] Running
	I1115 09:42:36.515099   60422 system_pods.go:89] "kube-apiserver-addons-209049" [db7797f5-d9d4-4fd1-bd3e-ca7cf178a6ed] Running
	I1115 09:42:36.515105   60422 system_pods.go:89] "kube-controller-manager-addons-209049" [1a32ce38-b5b0-481d-980a-42f5e37030cb] Running
	I1115 09:42:36.515122   60422 system_pods.go:89] "kube-ingress-dns-minikube" [48982f43-9ec1-4e9d-a65c-a93e59979a96] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:42:36.515128   60422 system_pods.go:89] "kube-proxy-vkr7k" [e066d5c7-375f-467b-af24-507d2a8393bb] Running
	I1115 09:42:36.515134   60422 system_pods.go:89] "kube-scheduler-addons-209049" [e8762fe1-3552-421f-8cd8-0e11d7f817a7] Running
	I1115 09:42:36.515143   60422 system_pods.go:89] "metrics-server-85b7d694d7-sgjrz" [72926a73-00e6-4532-92f6-7db18c63f53f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:42:36.515148   60422 system_pods.go:89] "nvidia-device-plugin-daemonset-qtrg4" [2ece1371-7279-4f25-ad2d-270518aadb18] Pending
	I1115 09:42:36.515156   60422 system_pods.go:89] "registry-6b586f9694-fwbg5" [83b0e192-16e8-40e2-9ff5-a5964957dc8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:42:36.515165   60422 system_pods.go:89] "registry-creds-764b6fb674-d6rh5" [e3dfc52f-3ad1-4877-8af4-d96c6b46d6a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:42:36.515170   60422 system_pods.go:89] "registry-proxy-xzbqg" [bcc22f7e-dc0a-4976-9707-3acbdc5421f3] Pending
	I1115 09:42:36.515175   60422 system_pods.go:89] "snapshot-controller-7d9fbc56b8-blfn9" [074c2f9d-00a5-4f0d-a757-4cd3d19884a2] Pending
	I1115 09:42:36.515180   60422 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mqtdg" [68c6aac6-c7a8-43e8-aeea-1a1bc2a3ffe7] Pending
	I1115 09:42:36.515186   60422 system_pods.go:89] "storage-provisioner" [5f3882a6-a000-4050-86f4-5d30ce6faeff] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:42:36.515211   60422 retry.go:31] will retry after 292.770926ms: missing components: kube-dns
	I1115 09:42:36.878320   60422 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1115 09:42:36.878351   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:36.881268   60422 system_pods.go:86] 20 kube-system pods found
	I1115 09:42:36.881304   60422 system_pods.go:89] "amd-gpu-device-plugin-zxglt" [2d4f6e85-2a3f-4fc9-85de-7a92b0cc0241] Pending
	I1115 09:42:36.881317   60422 system_pods.go:89] "coredns-66bc5c9577-4xn7s" [b5451c3e-5e31-46fc-aca6-1c24946da89a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:42:36.881323   60422 system_pods.go:89] "csi-hostpath-attacher-0" [75eb60d9-894a-4839-a716-6bfb2f15783b] Pending
	I1115 09:42:36.881332   60422 system_pods.go:89] "csi-hostpath-resizer-0" [557b0a30-f93a-4371-bd28-7ce29fe0f42f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 09:42:36.881340   60422 system_pods.go:89] "csi-hostpathplugin-n2grt" [1e55d345-4538-4c11-9d5d-27e7a8026753] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1115 09:42:36.881346   60422 system_pods.go:89] "etcd-addons-209049" [dbb94784-4512-4f24-8198-509c45540ddc] Running
	I1115 09:42:36.881352   60422 system_pods.go:89] "kindnet-p4lm7" [99c22a89-4507-418b-bf45-7c29147f179f] Running
	I1115 09:42:36.881358   60422 system_pods.go:89] "kube-apiserver-addons-209049" [db7797f5-d9d4-4fd1-bd3e-ca7cf178a6ed] Running
	I1115 09:42:36.881363   60422 system_pods.go:89] "kube-controller-manager-addons-209049" [1a32ce38-b5b0-481d-980a-42f5e37030cb] Running
	I1115 09:42:36.881373   60422 system_pods.go:89] "kube-ingress-dns-minikube" [48982f43-9ec1-4e9d-a65c-a93e59979a96] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:42:36.881378   60422 system_pods.go:89] "kube-proxy-vkr7k" [e066d5c7-375f-467b-af24-507d2a8393bb] Running
	I1115 09:42:36.881384   60422 system_pods.go:89] "kube-scheduler-addons-209049" [e8762fe1-3552-421f-8cd8-0e11d7f817a7] Running
	I1115 09:42:36.881392   60422 system_pods.go:89] "metrics-server-85b7d694d7-sgjrz" [72926a73-00e6-4532-92f6-7db18c63f53f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:42:36.881397   60422 system_pods.go:89] "nvidia-device-plugin-daemonset-qtrg4" [2ece1371-7279-4f25-ad2d-270518aadb18] Pending
	I1115 09:42:36.881408   60422 system_pods.go:89] "registry-6b586f9694-fwbg5" [83b0e192-16e8-40e2-9ff5-a5964957dc8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:42:36.881415   60422 system_pods.go:89] "registry-creds-764b6fb674-d6rh5" [e3dfc52f-3ad1-4877-8af4-d96c6b46d6a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:42:36.881420   60422 system_pods.go:89] "registry-proxy-xzbqg" [bcc22f7e-dc0a-4976-9707-3acbdc5421f3] Pending
	I1115 09:42:36.881424   60422 system_pods.go:89] "snapshot-controller-7d9fbc56b8-blfn9" [074c2f9d-00a5-4f0d-a757-4cd3d19884a2] Pending
	I1115 09:42:36.881433   60422 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mqtdg" [68c6aac6-c7a8-43e8-aeea-1a1bc2a3ffe7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:42:36.881442   60422 system_pods.go:89] "storage-provisioner" [5f3882a6-a000-4050-86f4-5d30ce6faeff] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:42:36.881468   60422 retry.go:31] will retry after 282.04747ms: missing components: kube-dns
	I1115 09:42:36.885138   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:36.987789   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:36.987880   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:37.180270   60422 system_pods.go:86] 20 kube-system pods found
	I1115 09:42:37.180317   60422 system_pods.go:89] "amd-gpu-device-plugin-zxglt" [2d4f6e85-2a3f-4fc9-85de-7a92b0cc0241] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1115 09:42:37.180329   60422 system_pods.go:89] "coredns-66bc5c9577-4xn7s" [b5451c3e-5e31-46fc-aca6-1c24946da89a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:42:37.180340   60422 system_pods.go:89] "csi-hostpath-attacher-0" [75eb60d9-894a-4839-a716-6bfb2f15783b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 09:42:37.180349   60422 system_pods.go:89] "csi-hostpath-resizer-0" [557b0a30-f93a-4371-bd28-7ce29fe0f42f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 09:42:37.180356   60422 system_pods.go:89] "csi-hostpathplugin-n2grt" [1e55d345-4538-4c11-9d5d-27e7a8026753] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1115 09:42:37.180362   60422 system_pods.go:89] "etcd-addons-209049" [dbb94784-4512-4f24-8198-509c45540ddc] Running
	I1115 09:42:37.180369   60422 system_pods.go:89] "kindnet-p4lm7" [99c22a89-4507-418b-bf45-7c29147f179f] Running
	I1115 09:42:37.180374   60422 system_pods.go:89] "kube-apiserver-addons-209049" [db7797f5-d9d4-4fd1-bd3e-ca7cf178a6ed] Running
	I1115 09:42:37.180379   60422 system_pods.go:89] "kube-controller-manager-addons-209049" [1a32ce38-b5b0-481d-980a-42f5e37030cb] Running
	I1115 09:42:37.180386   60422 system_pods.go:89] "kube-ingress-dns-minikube" [48982f43-9ec1-4e9d-a65c-a93e59979a96] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:42:37.180391   60422 system_pods.go:89] "kube-proxy-vkr7k" [e066d5c7-375f-467b-af24-507d2a8393bb] Running
	I1115 09:42:37.180396   60422 system_pods.go:89] "kube-scheduler-addons-209049" [e8762fe1-3552-421f-8cd8-0e11d7f817a7] Running
	I1115 09:42:37.180404   60422 system_pods.go:89] "metrics-server-85b7d694d7-sgjrz" [72926a73-00e6-4532-92f6-7db18c63f53f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:42:37.180413   60422 system_pods.go:89] "nvidia-device-plugin-daemonset-qtrg4" [2ece1371-7279-4f25-ad2d-270518aadb18] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1115 09:42:37.180421   60422 system_pods.go:89] "registry-6b586f9694-fwbg5" [83b0e192-16e8-40e2-9ff5-a5964957dc8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:42:37.180429   60422 system_pods.go:89] "registry-creds-764b6fb674-d6rh5" [e3dfc52f-3ad1-4877-8af4-d96c6b46d6a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:42:37.180434   60422 system_pods.go:89] "registry-proxy-xzbqg" [bcc22f7e-dc0a-4976-9707-3acbdc5421f3] Pending
	I1115 09:42:37.180442   60422 system_pods.go:89] "snapshot-controller-7d9fbc56b8-blfn9" [074c2f9d-00a5-4f0d-a757-4cd3d19884a2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:42:37.180456   60422 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mqtdg" [68c6aac6-c7a8-43e8-aeea-1a1bc2a3ffe7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:42:37.180465   60422 system_pods.go:89] "storage-provisioner" [5f3882a6-a000-4050-86f4-5d30ce6faeff] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:42:37.180490   60422 retry.go:31] will retry after 336.693004ms: missing components: kube-dns
	I1115 09:42:37.295402   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:37.394987   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:37.495855   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:37.495934   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:37.521803   60422 system_pods.go:86] 20 kube-system pods found
	I1115 09:42:37.521840   60422 system_pods.go:89] "amd-gpu-device-plugin-zxglt" [2d4f6e85-2a3f-4fc9-85de-7a92b0cc0241] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1115 09:42:37.521848   60422 system_pods.go:89] "coredns-66bc5c9577-4xn7s" [b5451c3e-5e31-46fc-aca6-1c24946da89a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:42:37.521857   60422 system_pods.go:89] "csi-hostpath-attacher-0" [75eb60d9-894a-4839-a716-6bfb2f15783b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 09:42:37.521862   60422 system_pods.go:89] "csi-hostpath-resizer-0" [557b0a30-f93a-4371-bd28-7ce29fe0f42f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 09:42:37.521868   60422 system_pods.go:89] "csi-hostpathplugin-n2grt" [1e55d345-4538-4c11-9d5d-27e7a8026753] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1115 09:42:37.521872   60422 system_pods.go:89] "etcd-addons-209049" [dbb94784-4512-4f24-8198-509c45540ddc] Running
	I1115 09:42:37.521876   60422 system_pods.go:89] "kindnet-p4lm7" [99c22a89-4507-418b-bf45-7c29147f179f] Running
	I1115 09:42:37.521880   60422 system_pods.go:89] "kube-apiserver-addons-209049" [db7797f5-d9d4-4fd1-bd3e-ca7cf178a6ed] Running
	I1115 09:42:37.521883   60422 system_pods.go:89] "kube-controller-manager-addons-209049" [1a32ce38-b5b0-481d-980a-42f5e37030cb] Running
	I1115 09:42:37.521890   60422 system_pods.go:89] "kube-ingress-dns-minikube" [48982f43-9ec1-4e9d-a65c-a93e59979a96] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:42:37.521896   60422 system_pods.go:89] "kube-proxy-vkr7k" [e066d5c7-375f-467b-af24-507d2a8393bb] Running
	I1115 09:42:37.521899   60422 system_pods.go:89] "kube-scheduler-addons-209049" [e8762fe1-3552-421f-8cd8-0e11d7f817a7] Running
	I1115 09:42:37.521908   60422 system_pods.go:89] "metrics-server-85b7d694d7-sgjrz" [72926a73-00e6-4532-92f6-7db18c63f53f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:42:37.521917   60422 system_pods.go:89] "nvidia-device-plugin-daemonset-qtrg4" [2ece1371-7279-4f25-ad2d-270518aadb18] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1115 09:42:37.521922   60422 system_pods.go:89] "registry-6b586f9694-fwbg5" [83b0e192-16e8-40e2-9ff5-a5964957dc8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:42:37.521929   60422 system_pods.go:89] "registry-creds-764b6fb674-d6rh5" [e3dfc52f-3ad1-4877-8af4-d96c6b46d6a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:42:37.521934   60422 system_pods.go:89] "registry-proxy-xzbqg" [bcc22f7e-dc0a-4976-9707-3acbdc5421f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1115 09:42:37.521941   60422 system_pods.go:89] "snapshot-controller-7d9fbc56b8-blfn9" [074c2f9d-00a5-4f0d-a757-4cd3d19884a2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:42:37.521964   60422 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mqtdg" [68c6aac6-c7a8-43e8-aeea-1a1bc2a3ffe7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:42:37.521970   60422 system_pods.go:89] "storage-provisioner" [5f3882a6-a000-4050-86f4-5d30ce6faeff] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:42:37.521989   60422 retry.go:31] will retry after 516.191783ms: missing components: kube-dns
	I1115 09:42:37.793304   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:37.884598   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:37.988291   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:37.988347   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:38.042848   60422 system_pods.go:86] 20 kube-system pods found
	I1115 09:42:38.042887   60422 system_pods.go:89] "amd-gpu-device-plugin-zxglt" [2d4f6e85-2a3f-4fc9-85de-7a92b0cc0241] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1115 09:42:38.042894   60422 system_pods.go:89] "coredns-66bc5c9577-4xn7s" [b5451c3e-5e31-46fc-aca6-1c24946da89a] Running
	I1115 09:42:38.042904   60422 system_pods.go:89] "csi-hostpath-attacher-0" [75eb60d9-894a-4839-a716-6bfb2f15783b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 09:42:38.042910   60422 system_pods.go:89] "csi-hostpath-resizer-0" [557b0a30-f93a-4371-bd28-7ce29fe0f42f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 09:42:38.042915   60422 system_pods.go:89] "csi-hostpathplugin-n2grt" [1e55d345-4538-4c11-9d5d-27e7a8026753] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1115 09:42:38.042920   60422 system_pods.go:89] "etcd-addons-209049" [dbb94784-4512-4f24-8198-509c45540ddc] Running
	I1115 09:42:38.042924   60422 system_pods.go:89] "kindnet-p4lm7" [99c22a89-4507-418b-bf45-7c29147f179f] Running
	I1115 09:42:38.042928   60422 system_pods.go:89] "kube-apiserver-addons-209049" [db7797f5-d9d4-4fd1-bd3e-ca7cf178a6ed] Running
	I1115 09:42:38.042931   60422 system_pods.go:89] "kube-controller-manager-addons-209049" [1a32ce38-b5b0-481d-980a-42f5e37030cb] Running
	I1115 09:42:38.042940   60422 system_pods.go:89] "kube-ingress-dns-minikube" [48982f43-9ec1-4e9d-a65c-a93e59979a96] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:42:38.042944   60422 system_pods.go:89] "kube-proxy-vkr7k" [e066d5c7-375f-467b-af24-507d2a8393bb] Running
	I1115 09:42:38.042950   60422 system_pods.go:89] "kube-scheduler-addons-209049" [e8762fe1-3552-421f-8cd8-0e11d7f817a7] Running
	I1115 09:42:38.042970   60422 system_pods.go:89] "metrics-server-85b7d694d7-sgjrz" [72926a73-00e6-4532-92f6-7db18c63f53f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:42:38.042983   60422 system_pods.go:89] "nvidia-device-plugin-daemonset-qtrg4" [2ece1371-7279-4f25-ad2d-270518aadb18] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1115 09:42:38.042994   60422 system_pods.go:89] "registry-6b586f9694-fwbg5" [83b0e192-16e8-40e2-9ff5-a5964957dc8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:42:38.043003   60422 system_pods.go:89] "registry-creds-764b6fb674-d6rh5" [e3dfc52f-3ad1-4877-8af4-d96c6b46d6a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:42:38.043008   60422 system_pods.go:89] "registry-proxy-xzbqg" [bcc22f7e-dc0a-4976-9707-3acbdc5421f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1115 09:42:38.043016   60422 system_pods.go:89] "snapshot-controller-7d9fbc56b8-blfn9" [074c2f9d-00a5-4f0d-a757-4cd3d19884a2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:42:38.043022   60422 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mqtdg" [68c6aac6-c7a8-43e8-aeea-1a1bc2a3ffe7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:42:38.043028   60422 system_pods.go:89] "storage-provisioner" [5f3882a6-a000-4050-86f4-5d30ce6faeff] Running
	I1115 09:42:38.043036   60422 system_pods.go:126] duration metric: took 1.535055494s to wait for k8s-apps to be running ...
	I1115 09:42:38.043047   60422 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 09:42:38.043093   60422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:42:38.056649   60422 system_svc.go:56] duration metric: took 13.592264ms WaitForService to wait for kubelet
	I1115 09:42:38.056675   60422 kubeadm.go:587] duration metric: took 42.666262511s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:42:38.056693   60422 node_conditions.go:102] verifying NodePressure condition ...
	I1115 09:42:38.075130   60422 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:42:38.075162   60422 node_conditions.go:123] node cpu capacity is 8
	I1115 09:42:38.075177   60422 node_conditions.go:105] duration metric: took 18.47923ms to run NodePressure ...
	I1115 09:42:38.075189   60422 start.go:242] waiting for startup goroutines ...
	I1115 09:42:38.294373   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:38.385071   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:38.487805   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:38.487980   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:38.793561   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:38.885535   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:38.988250   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:38.988349   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:39.294691   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:39.385652   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:39.488900   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:39.489434   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:39.793025   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:39.886117   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:39.988633   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:39.989018   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:40.296473   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:40.394995   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:40.489027   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:40.489135   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:40.793631   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:40.885854   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:40.988041   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:40.988331   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:41.294567   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:41.386128   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:41.487749   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:41.487791   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:41.794089   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:41.885889   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:41.988352   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:41.988884   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:42.293528   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:42.385693   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:42.488013   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:42.488025   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:42.794413   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:42.885187   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:42.988439   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:42.988570   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:43.294366   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:43.385170   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:43.488571   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:43.488614   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:43.793694   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:43.885633   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:43.987721   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:43.987904   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:44.293695   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:44.394548   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:44.487688   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:44.487742   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:44.794147   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:44.885732   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:44.987784   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:44.988076   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:45.294191   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:45.385028   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:45.488671   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:45.488736   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:45.793056   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:45.885749   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:45.987938   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:45.988029   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:46.293632   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:46.385649   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:46.487668   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:46.487697   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:46.793650   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:46.885333   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:46.987301   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:46.987454   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:47.294091   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:47.385639   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:47.487682   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:47.487754   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:47.792720   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:47.885198   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:47.987892   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:47.987970   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:48.293421   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:48.385232   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:48.488126   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:48.488367   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:48.793225   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:48.884555   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:48.987526   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:48.987729   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:49.293901   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:49.385613   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:49.487408   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:49.487622   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:49.793984   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:49.885446   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:49.987391   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:49.987391   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:50.293941   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:50.385547   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:50.487651   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:50.487684   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:50.793375   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:50.885558   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:50.988153   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:50.988252   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:51.294148   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:51.384742   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:51.487598   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:51.487618   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:51.793074   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:51.885596   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:51.987442   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:51.987481   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:52.294327   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:52.394813   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:52.488118   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:52.488245   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:52.793261   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:52.884780   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:52.987793   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:52.987842   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:53.293574   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:53.385847   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:53.487940   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:53.488056   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:53.793900   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:53.885545   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:53.987595   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:53.987595   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:54.293985   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:54.385861   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:54.487829   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:54.487964   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:54.792576   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:54.885582   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:54.987601   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:54.987643   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:55.293602   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:55.385733   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:55.488258   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:55.488328   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:55.793106   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:55.885791   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:55.987830   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:55.987837   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:56.293753   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:56.385687   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:56.487991   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:56.488030   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:56.793553   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:56.885458   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:56.987208   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:56.987216   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:57.293623   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:57.394131   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:57.487770   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:57.487827   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:57.792733   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:57.885482   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:57.987436   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:57.987634   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:58.294512   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:58.385277   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:58.490293   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:58.490711   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:58.793214   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:58.885603   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:58.987695   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:58.987705   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:59.293631   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:59.385672   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:59.487381   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:59.487427   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:59.793883   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:59.884725   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:59.987478   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:59.987496   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:00.293352   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:00.384904   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:00.487780   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:00.487861   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:00.793040   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:00.885829   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:00.987879   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:00.987977   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:01.293333   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:01.394156   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:01.488071   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:01.488108   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:01.793528   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:01.885087   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:01.988418   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:01.988597   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:02.294473   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:02.385200   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:02.488349   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:02.488607   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:02.793589   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:02.885000   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:02.987888   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:02.987930   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:03.292943   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:03.385871   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:03.487947   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:03.487947   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:03.793493   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:03.885421   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:03.987774   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:03.987940   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:04.293322   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:04.385692   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:04.487829   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:04.487826   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:04.792907   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:04.886019   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:04.988009   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:04.988122   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:05.292266   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:05.384507   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:05.488265   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:05.488481   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:05.793233   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:05.884897   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:05.988277   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:05.988395   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:06.300313   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:06.399738   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:06.487787   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:06.488035   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:06.793542   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:06.884794   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:06.987745   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:06.987843   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:07.293542   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:07.384965   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:07.488300   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:07.488399   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:07.793316   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:07.884745   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:07.987515   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:07.987566   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:08.295372   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:08.385272   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:08.489129   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:08.489263   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:08.793765   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:08.894354   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:08.986887   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:08.987050   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:09.293252   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:09.393529   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:09.494353   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:09.494443   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:09.793060   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:09.885460   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:09.987312   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:09.987444   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:10.293882   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:10.385616   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:10.487752   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:10.487855   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:10.793923   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:10.885731   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:10.988202   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:10.988474   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:11.294352   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:11.385208   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:11.488440   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:11.488493   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:11.793662   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:11.885880   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:11.988073   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:11.988173   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:12.293934   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:12.385583   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:12.487878   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:12.487993   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:12.793467   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:12.885252   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:12.988513   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:12.988568   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:13.293929   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:13.386147   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:13.489901   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:13.490360   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:13.793572   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:13.893814   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:13.987716   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:13.987753   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:14.293551   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:14.394144   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:14.487794   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:14.487890   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:14.792670   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:14.885275   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:14.986807   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:14.986870   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:15.293482   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:15.384924   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:15.488343   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:15.488385   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:15.793704   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:15.885294   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:15.988381   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:15.988660   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:16.293171   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:16.384874   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:16.487854   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:16.487936   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:16.792723   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:16.885321   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:16.987130   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:16.987196   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:17.293529   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:17.385107   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:17.488103   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:17.488148   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:17.793147   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:17.885747   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:17.988772   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:17.989480   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:18.295788   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:18.385510   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:18.487926   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:18.488120   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:18.793246   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:18.885511   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:18.987996   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:18.988203   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:19.294248   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:19.385181   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:19.487471   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:19.487471   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:19.794495   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:19.894936   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:19.995315   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:19.995465   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:20.293827   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:20.385915   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:20.488546   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:20.488710   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:20.794303   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:20.885339   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:20.988705   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:20.988929   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:21.294150   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:21.384905   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:21.487934   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:21.487976   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:21.793242   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:21.884710   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:21.987756   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:21.987836   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:22.293617   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:22.385881   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:22.487723   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:22.487860   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:22.793729   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:22.885180   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:22.988102   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:22.988106   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:23.293178   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:23.384755   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:23.487831   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:23.487870   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:23.793651   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:23.884946   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:23.987971   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:23.987994   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:24.292984   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:24.385622   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:24.487637   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:24.487785   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:24.792754   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:24.885319   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:24.987238   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:24.987273   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:25.293631   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:25.393766   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:25.487788   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:25.487853   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:25.793244   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:25.884801   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:25.987557   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:25.987730   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:26.292795   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:26.385226   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:26.487997   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:26.488085   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:26.793016   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:26.886161   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:26.988310   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:26.988343   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:27.293573   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:27.385251   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:27.488147   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:27.488257   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:27.794124   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:27.894580   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:27.995015   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:27.995083   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:28.293663   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:28.385311   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:28.487680   60422 kapi.go:107] duration metric: took 1m27.50342045s to wait for kubernetes.io/minikube-addons=registry ...
	I1115 09:43:28.487685   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:28.793448   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:28.884925   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:28.987878   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:29.293066   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:29.385560   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:29.487700   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:29.794669   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:29.885323   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:29.989068   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:30.293438   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:30.387225   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:30.489547   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:30.793729   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:30.885670   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:30.988447   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:31.294405   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:31.385482   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:31.487327   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:31.793624   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:31.885399   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:31.987988   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:32.293263   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:32.384810   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:32.488424   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:32.793870   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:32.886562   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:32.989805   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:33.297249   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:33.385554   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:33.488969   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:33.794259   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:33.886486   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:33.989988   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:34.293605   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:34.385299   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:34.490667   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:34.793835   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:34.885995   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:34.987849   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:35.293768   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:35.387382   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:35.488276   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:35.793937   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:35.886585   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:35.987923   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:36.294704   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:36.385460   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:36.487771   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:36.793872   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:36.885760   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:36.988073   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:37.293337   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:37.384825   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:37.488103   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:37.793749   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:37.885150   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:37.988704   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:38.294446   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:38.385525   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:38.487630   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:38.794231   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:38.884986   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:38.988076   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:39.293211   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:39.384702   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:39.487747   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:39.793298   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:39.884921   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:39.987917   60422 kapi.go:107] duration metric: took 1m39.00367176s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1115 09:43:40.293038   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:40.386016   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:40.793083   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:40.885746   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:41.293538   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:41.385421   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:41.794155   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:41.884513   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:42.292803   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:42.385547   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:42.793367   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:42.885007   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:43.293198   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:43.385152   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:43.794544   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:43.885456   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:44.294346   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:44.385328   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:44.794000   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:44.886145   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:45.293815   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:45.394059   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:45.793799   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:45.885636   60422 kapi.go:107] duration metric: took 1m41.003723506s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1115 09:43:45.888085   60422 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-209049 cluster.
	I1115 09:43:45.889451   60422 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1115 09:43:45.890919   60422 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1115 09:43:46.293145   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:46.793125   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:47.293079   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:47.793413   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:48.294001   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:48.793128   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:49.294916   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:49.794587   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:50.294900   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:50.793295   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:51.294272   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:51.793727   60422 kapi.go:107] duration metric: took 1m50.003941095s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1115 09:43:51.795692   60422 out.go:179] * Enabled addons: cloud-spanner, default-storageclass, storage-provisioner-rancher, inspektor-gadget, ingress-dns, registry-creds, storage-provisioner, amd-gpu-device-plugin, nvidia-device-plugin, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1115 09:43:51.797243   60422 addons.go:515] duration metric: took 1m56.406809677s for enable addons: enabled=[cloud-spanner default-storageclass storage-provisioner-rancher inspektor-gadget ingress-dns registry-creds storage-provisioner amd-gpu-device-plugin nvidia-device-plugin metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1115 09:43:51.797295   60422 start.go:247] waiting for cluster config update ...
	I1115 09:43:51.797317   60422 start.go:256] writing updated cluster config ...
	I1115 09:43:51.797597   60422 ssh_runner.go:195] Run: rm -f paused
	I1115 09:43:51.802297   60422 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:43:51.805457   60422 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4xn7s" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:43:51.809596   60422 pod_ready.go:94] pod "coredns-66bc5c9577-4xn7s" is "Ready"
	I1115 09:43:51.809619   60422 pod_ready.go:86] duration metric: took 4.139466ms for pod "coredns-66bc5c9577-4xn7s" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:43:51.894024   60422 pod_ready.go:83] waiting for pod "etcd-addons-209049" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:43:51.898633   60422 pod_ready.go:94] pod "etcd-addons-209049" is "Ready"
	I1115 09:43:51.898657   60422 pod_ready.go:86] duration metric: took 4.607036ms for pod "etcd-addons-209049" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:43:51.900682   60422 pod_ready.go:83] waiting for pod "kube-apiserver-addons-209049" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:43:51.904666   60422 pod_ready.go:94] pod "kube-apiserver-addons-209049" is "Ready"
	I1115 09:43:51.904688   60422 pod_ready.go:86] duration metric: took 3.981533ms for pod "kube-apiserver-addons-209049" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:43:51.906510   60422 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-209049" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:43:52.207192   60422 pod_ready.go:94] pod "kube-controller-manager-addons-209049" is "Ready"
	I1115 09:43:52.207229   60422 pod_ready.go:86] duration metric: took 300.696386ms for pod "kube-controller-manager-addons-209049" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:43:52.405894   60422 pod_ready.go:83] waiting for pod "kube-proxy-vkr7k" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:43:52.806067   60422 pod_ready.go:94] pod "kube-proxy-vkr7k" is "Ready"
	I1115 09:43:52.806094   60422 pod_ready.go:86] duration metric: took 400.174126ms for pod "kube-proxy-vkr7k" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:43:53.006153   60422 pod_ready.go:83] waiting for pod "kube-scheduler-addons-209049" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:43:53.406054   60422 pod_ready.go:94] pod "kube-scheduler-addons-209049" is "Ready"
	I1115 09:43:53.406080   60422 pod_ready.go:86] duration metric: took 399.900012ms for pod "kube-scheduler-addons-209049" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:43:53.406092   60422 pod_ready.go:40] duration metric: took 1.603760852s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:43:53.453659   60422 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 09:43:53.456563   60422 out.go:179] * Done! kubectl is now configured to use "addons-209049" cluster and "default" namespace by default
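The gcp-auth messages above describe an opt-out: pods carrying the `gcp-auth-skip-secret` label key are skipped by the credential-mounting webhook, and the readiness polling that follows waits on labeled kube-system pods. A minimal sketch of both, assuming the `addons-209049` kubectl context that this run configures; the pod name and the label value are illustrative, not taken from the test:

    # Hypothetical pod that opts out of credential mounting via the
    # gcp-auth-skip-secret label mentioned in the log above
    # (image name reused from the log; label value assumed).
    kubectl --context addons-209049 run no-gcp-creds-demo \
      --image=docker.io/kicbase/echo-server:1.0 \
      --labels=gcp-auth-skip-secret=true

    # Rough manual equivalent of the readiness polling above: wait for
    # the labeled kube-system pods to report Ready within a timeout.
    kubectl --context addons-209049 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s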
	
	
	==> CRI-O <==
	Nov 15 09:44:55 addons-209049 crio[898]: time="2025-11-15T09:44:55.181601713Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=fb95dcb2-6b96-4440-a0d1-80226fccf21e name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:44:55 addons-209049 crio[898]: time="2025-11-15T09:44:55.216015863Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=7c800d59-6ac3-42c0-a55f-13f1b3b332ad name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:44:55 addons-209049 crio[898]: time="2025-11-15T09:44:55.220179734Z" level=info msg="Creating container: kube-system/registry-creds-764b6fb674-d6rh5/registry-creds" id=10d6f03a-743c-4774-8122-fcf09e97dce2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 09:44:55 addons-209049 crio[898]: time="2025-11-15T09:44:55.220310284Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:44:55 addons-209049 crio[898]: time="2025-11-15T09:44:55.227064081Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:44:55 addons-209049 crio[898]: time="2025-11-15T09:44:55.227538089Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:44:55 addons-209049 crio[898]: time="2025-11-15T09:44:55.247789763Z" level=info msg="Created container 9c8fb48c2845a5562879a83457c77d601d8d01fdacab0223b315a437b895abad: kube-system/registry-creds-764b6fb674-d6rh5/registry-creds" id=10d6f03a-743c-4774-8122-fcf09e97dce2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 09:44:55 addons-209049 crio[898]: time="2025-11-15T09:44:55.248439493Z" level=info msg="Starting container: 9c8fb48c2845a5562879a83457c77d601d8d01fdacab0223b315a437b895abad" id=f13a9f0e-7302-4edb-bcd1-32d5af457472 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 09:44:55 addons-209049 crio[898]: time="2025-11-15T09:44:55.250252459Z" level=info msg="Started container" PID=8983 containerID=9c8fb48c2845a5562879a83457c77d601d8d01fdacab0223b315a437b895abad description=kube-system/registry-creds-764b6fb674-d6rh5/registry-creds id=f13a9f0e-7302-4edb-bcd1-32d5af457472 name=/runtime.v1.RuntimeService/StartContainer sandboxID=490131c22012c2b00b12f47d456eb9615c31d525902bacc21fc79682a85fbbfd
	Nov 15 09:45:48 addons-209049 crio[898]: time="2025-11-15T09:45:48.890092698Z" level=info msg="Stopping pod sandbox: fd9079109765065d28d0fd58a1286c319ac436b528106ee85d682e858db8d517" id=a10eec97-ec7f-4d3a-854a-928de56c4cc2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 15 09:45:48 addons-209049 crio[898]: time="2025-11-15T09:45:48.890150281Z" level=info msg="Stopped pod sandbox (already stopped): fd9079109765065d28d0fd58a1286c319ac436b528106ee85d682e858db8d517" id=a10eec97-ec7f-4d3a-854a-928de56c4cc2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 15 09:45:48 addons-209049 crio[898]: time="2025-11-15T09:45:48.89051108Z" level=info msg="Removing pod sandbox: fd9079109765065d28d0fd58a1286c319ac436b528106ee85d682e858db8d517" id=6b60329a-3bc7-4d7a-aa3d-3b5a4e70a092 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 15 09:45:48 addons-209049 crio[898]: time="2025-11-15T09:45:48.895299803Z" level=info msg="Removed pod sandbox: fd9079109765065d28d0fd58a1286c319ac436b528106ee85d682e858db8d517" id=6b60329a-3bc7-4d7a-aa3d-3b5a4e70a092 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 15 09:46:46 addons-209049 crio[898]: time="2025-11-15T09:46:46.441889067Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-dn92n/POD" id=befb8143-3c6e-4c1d-8d50-9436abfa57cb name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 09:46:46 addons-209049 crio[898]: time="2025-11-15T09:46:46.442021935Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:46:46 addons-209049 crio[898]: time="2025-11-15T09:46:46.447931559Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-dn92n Namespace:default ID:a686c850b6adc42a6ca48d9e18bb45f2884922af30cb7179256269632ae9f73e UID:ae43d79f-edf7-4aea-9a38-775a6f6c5b19 NetNS:/var/run/netns/69ed69f4-9922-4f2c-bcbe-5e4f1ea1be66 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000e12ea8}] Aliases:map[]}"
	Nov 15 09:46:46 addons-209049 crio[898]: time="2025-11-15T09:46:46.447973922Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-dn92n to CNI network \"kindnet\" (type=ptp)"
	Nov 15 09:46:46 addons-209049 crio[898]: time="2025-11-15T09:46:46.458844217Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-dn92n Namespace:default ID:a686c850b6adc42a6ca48d9e18bb45f2884922af30cb7179256269632ae9f73e UID:ae43d79f-edf7-4aea-9a38-775a6f6c5b19 NetNS:/var/run/netns/69ed69f4-9922-4f2c-bcbe-5e4f1ea1be66 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000e12ea8}] Aliases:map[]}"
	Nov 15 09:46:46 addons-209049 crio[898]: time="2025-11-15T09:46:46.459005771Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-dn92n for CNI network kindnet (type=ptp)"
	Nov 15 09:46:46 addons-209049 crio[898]: time="2025-11-15T09:46:46.461380878Z" level=info msg="Ran pod sandbox a686c850b6adc42a6ca48d9e18bb45f2884922af30cb7179256269632ae9f73e with infra container: default/hello-world-app-5d498dc89-dn92n/POD" id=befb8143-3c6e-4c1d-8d50-9436abfa57cb name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 09:46:46 addons-209049 crio[898]: time="2025-11-15T09:46:46.462680056Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=cfd891e3-eefc-42ce-8035-6720b6eec3c0 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:46:46 addons-209049 crio[898]: time="2025-11-15T09:46:46.462798933Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=cfd891e3-eefc-42ce-8035-6720b6eec3c0 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:46:46 addons-209049 crio[898]: time="2025-11-15T09:46:46.462837046Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=cfd891e3-eefc-42ce-8035-6720b6eec3c0 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:46:46 addons-209049 crio[898]: time="2025-11-15T09:46:46.463512249Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=da30b599-14b3-423b-bb80-142fa3f9d07c name=/runtime.v1.ImageService/PullImage
	Nov 15 09:46:46 addons-209049 crio[898]: time="2025-11-15T09:46:46.467515075Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	9c8fb48c2845a       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago   Running             registry-creds                           0                   490131c22012c       registry-creds-764b6fb674-d6rh5            kube-system
	270712cc42869       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago        Running             nginx                                    0                   1718678bfaf2e       nginx                                      default
	ff67f0e87fc96       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago        Running             busybox                                  0                   78000eed37027       busybox                                    default
	3caa65a862513       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago        Running             csi-snapshotter                          0                   a6780b2ebdf30       csi-hostpathplugin-n2grt                   kube-system
	b0ff95e639d4d       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago        Running             csi-provisioner                          0                   a6780b2ebdf30       csi-hostpathplugin-n2grt                   kube-system
	c9bdb51e12a14       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago        Running             liveness-probe                           0                   a6780b2ebdf30       csi-hostpathplugin-n2grt                   kube-system
	e7dcd097399e8       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago        Running             hostpath                                 0                   a6780b2ebdf30       csi-hostpathplugin-n2grt                   kube-system
	acf4ca35c9f55       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago        Running             node-driver-registrar                    0                   a6780b2ebdf30       csi-hostpathplugin-n2grt                   kube-system
	57de45fd8698b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 3 minutes ago        Running             gcp-auth                                 0                   7c64127183e0a       gcp-auth-78565c9fb4-jr55m                  gcp-auth
	e64bd6dad227f       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             3 minutes ago        Running             controller                               0                   573aefb125022       ingress-nginx-controller-6c8bf45fb-j4f8b   ingress-nginx
	612149871aec4       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            3 minutes ago        Running             gadget                                   0                   f183d6a201637       gadget-cbnnb                               gadget
	171c9fa6da7f8       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago        Running             registry-proxy                           0                   fad0e1b51b35b       registry-proxy-xzbqg                       kube-system
	ff456e58c6b53       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago        Running             csi-external-health-monitor-controller   0                   a6780b2ebdf30       csi-hostpathplugin-n2grt                   kube-system
	ee4946da5ae0d       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   f974b5d6bdb65       nvidia-device-plugin-daemonset-qtrg4       kube-system
	103636dd26caf       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   a3d7d91646be5       snapshot-controller-7d9fbc56b8-blfn9       kube-system
	bc5a7bfdb2232       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago        Running             yakd                                     0                   d779fbd48f554       yakd-dashboard-5ff678cb9-5kfrb             yakd-dashboard
	49680c8d74f4f       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago        Running             amd-gpu-device-plugin                    0                   0bd21297d22f1       amd-gpu-device-plugin-zxglt                kube-system
	b667435537d1e       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago        Running             csi-attacher                             0                   a58d0e96ae759       csi-hostpath-attacher-0                    kube-system
	0eebfaee45b60       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   693d7abd0a21d       snapshot-controller-7d9fbc56b8-mqtdg       kube-system
	2e0b28ec2dfa3       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago        Running             csi-resizer                              0                   cd70846b3479b       csi-hostpath-resizer-0                     kube-system
	ee41b9a2e021c       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             3 minutes ago        Exited              patch                                    1                   ca5b8a192d36e       ingress-nginx-admission-patch-d5h7k        ingress-nginx
	25f3037224afb       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   3 minutes ago        Exited              create                                   0                   7928bb55702f4       ingress-nginx-admission-create-fxrnb       ingress-nginx
	5c40e88663ddd       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago        Running             local-path-provisioner                   0                   0c519e741bb0d       local-path-provisioner-648f6765c9-6trtr    local-path-storage
	4c734c004dda0       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago        Running             registry                                 0                   ae6edd7f24add       registry-6b586f9694-fwbg5                  kube-system
	db6577073b2a6       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago        Running             metrics-server                           0                   4e3144467a734       metrics-server-85b7d694d7-sgjrz            kube-system
	da7a6d3454ebc       gcr.io/cloud-spanner-emulator/emulator@sha256:7360f5c5ff4b89d75592d9585fc2d59d207b08ccf262a84edfe79ee0613a7099                               3 minutes ago        Running             cloud-spanner-emulator                   0                   a1765058a99f4       cloud-spanner-emulator-6f9fcf858b-7z68m    default
	ed655dc7f306b       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               4 minutes ago        Running             minikube-ingress-dns                     0                   a36ea2077b2f5       kube-ingress-dns-minikube                  kube-system
	abed161df6f25       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             4 minutes ago        Running             coredns                                  0                   0e6f20b4c3ced       coredns-66bc5c9577-4xn7s                   kube-system
	1bdef9117bea1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             4 minutes ago        Running             storage-provisioner                      0                   6d65d688f91c8       storage-provisioner                        kube-system
	c47933319f25e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago        Running             kindnet-cni                              0                   69744a9050166       kindnet-p4lm7                              kube-system
	bf4bdcddcb90f       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago        Running             kube-proxy                               0                   547bdcb69a56e       kube-proxy-vkr7k                           kube-system
	2d0c6bfa456fd       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             5 minutes ago        Running             kube-controller-manager                  0                   211afdbd9e242       kube-controller-manager-addons-209049      kube-system
	fc273347a0fa0       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             5 minutes ago        Running             kube-scheduler                           0                   22b016d5b1cb5       kube-scheduler-addons-209049               kube-system
	8f354571302cf       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             5 minutes ago        Running             kube-apiserver                           0                   9b97753985d92       kube-apiserver-addons-209049               kube-system
	1f26d41b1ae72       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             5 minutes ago        Running             etcd                                     0                   eaeefa8b1d51c       etcd-addons-209049                         kube-system
	
	
	==> coredns [abed161df6f25739be955e904c69c8cc237aee9607441efb5b85c298a8066bc3] <==
	[INFO] 10.244.0.22:41370 - 44962 "AAAA IN storage.googleapis.com.southamerica-west1-a.c.k8s-minikube.internal. udp 96 false 1232" NXDOMAIN qr,rd,ra 202 0.006450045s
	[INFO] 10.244.0.22:57119 - 57641 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005058705s
	[INFO] 10.244.0.22:38996 - 60399 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005404149s
	[INFO] 10.244.0.22:34542 - 58722 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004315687s
	[INFO] 10.244.0.22:55544 - 49007 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005427596s
	[INFO] 10.244.0.22:34734 - 43458 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00093051s
	[INFO] 10.244.0.22:36932 - 20069 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 192 0.001197468s
	[INFO] 10.244.0.27:60616 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000234158s
	[INFO] 10.244.0.27:58901 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000195602s
	[INFO] 10.244.0.31:55393 - 16777 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000208514s
	[INFO] 10.244.0.31:48326 - 7885 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000327333s
	[INFO] 10.244.0.31:59031 - 22919 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000097878s
	[INFO] 10.244.0.31:47678 - 53391 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000142065s
	[INFO] 10.244.0.31:41861 - 61676 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000102974s
	[INFO] 10.244.0.31:48478 - 41296 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000127713s
	[INFO] 10.244.0.31:37449 - 43437 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.003003537s
	[INFO] 10.244.0.31:59426 - 44673 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.004022965s
	[INFO] 10.244.0.31:58434 - 50572 "A IN accounts.google.com.southamerica-west1-a.c.k8s-minikube.internal. udp 82 false 512" NXDOMAIN qr,rd,ra 199 0.006136576s
	[INFO] 10.244.0.31:36436 - 7432 "AAAA IN accounts.google.com.southamerica-west1-a.c.k8s-minikube.internal. udp 82 false 512" NXDOMAIN qr,rd,ra 199 0.006495845s
	[INFO] 10.244.0.31:59677 - 47126 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004824123s
	[INFO] 10.244.0.31:34744 - 59413 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.00560434s
	[INFO] 10.244.0.31:53473 - 4066 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.00379722s
	[INFO] 10.244.0.31:36854 - 8492 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004524213s
	[INFO] 10.244.0.31:59229 - 58699 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001647259s
	[INFO] 10.244.0.31:37712 - 65160 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001802245s
	
	
	==> describe nodes <==
	Name:               addons-209049
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-209049
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=addons-209049
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T09_41_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-209049
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-209049"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 09:41:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-209049
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 09:46:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 09:45:53 +0000   Sat, 15 Nov 2025 09:41:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 09:45:53 +0000   Sat, 15 Nov 2025 09:41:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 09:45:53 +0000   Sat, 15 Nov 2025 09:41:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 09:45:53 +0000   Sat, 15 Nov 2025 09:42:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-209049
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                1008282d-3b27-4e3f-97ca-d7ea63ae3248
	  Boot ID:                    6a33f504-3a8c-4d49-8fc5-301cf5275efc
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	  default                     cloud-spanner-emulator-6f9fcf858b-7z68m     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  default                     hello-world-app-5d498dc89-dn92n             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  gadget                      gadget-cbnnb                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  gcp-auth                    gcp-auth-78565c9fb4-jr55m                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-j4f8b    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m47s
	  kube-system                 amd-gpu-device-plugin-zxglt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 coredns-66bc5c9577-4xn7s                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m52s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 csi-hostpathplugin-n2grt                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 etcd-addons-209049                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m59s
	  kube-system                 kindnet-p4lm7                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m53s
	  kube-system                 kube-apiserver-addons-209049                250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-controller-manager-addons-209049       200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-proxy-vkr7k                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-scheduler-addons-209049                100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 metrics-server-85b7d694d7-sgjrz             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m48s
	  kube-system                 nvidia-device-plugin-daemonset-qtrg4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 registry-6b586f9694-fwbg5                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 registry-creds-764b6fb674-d6rh5             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 registry-proxy-xzbqg                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 snapshot-controller-7d9fbc56b8-blfn9        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 snapshot-controller-7d9fbc56b8-mqtdg        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  local-path-storage          local-path-provisioner-648f6765c9-6trtr     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-5kfrb              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 4m51s  kube-proxy       
	  Normal   Starting                 4m59s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m59s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m59s  kubelet          Node addons-209049 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m59s  kubelet          Node addons-209049 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m59s  kubelet          Node addons-209049 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m53s  node-controller  Node addons-209049 event: Registered Node addons-209049 in Controller
	  Normal   NodeReady                4m11s  kubelet          Node addons-209049 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.023932] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.604079] kauditd_printk_skb: 47 callbacks suppressed
	[Nov15 09:41] kmem.limit_in_bytes is deprecated and will be removed. Writing any value to this file has no effect. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov15 09:44] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +1.059558] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +1.023907] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +1.023868] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +1.023912] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +1.023925] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +2.047814] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +4.031639] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +8.127259] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[Nov15 09:45] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[ +32.253211] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	
	
	==> etcd [1f26d41b1ae72e1dc34f54376e004475d2c4c414c8b70169e5095a83b0a7212d] <==
	{"level":"warn","ts":"2025-11-15T09:41:45.598914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.605940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.618081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.624073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.630360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.636846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.643035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.678665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.685926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.692061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.701326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.709065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.714890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.720577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.732021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.779036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.785103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.824904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:02.181001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:02.187459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:24.019030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:24.025323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:24.099418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:24.105507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58872","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-15T09:44:12.134741Z","caller":"traceutil/trace.go:172","msg":"trace[395432173] transaction","detail":"{read_only:false; response_revision:1331; number_of_response:1; }","duration":"148.418164ms","start":"2025-11-15T09:44:11.986300Z","end":"2025-11-15T09:44:12.134718Z","steps":["trace[395432173] 'process raft request'  (duration: 134.162007ms)","trace[395432173] 'compare'  (duration: 14.08635ms)"],"step_count":2}
	
	
	==> gcp-auth [57de45fd8698b773c19fc5a8e0495dd8b9e1a7e9a44b7071058464556ee4af16] <==
	2025/11/15 09:43:45 GCP Auth Webhook started!
	2025/11/15 09:43:53 Ready to marshal response ...
	2025/11/15 09:43:53 Ready to write response ...
	2025/11/15 09:43:53 Ready to marshal response ...
	2025/11/15 09:43:53 Ready to write response ...
	2025/11/15 09:43:54 Ready to marshal response ...
	2025/11/15 09:43:54 Ready to write response ...
	2025/11/15 09:44:06 Ready to marshal response ...
	2025/11/15 09:44:06 Ready to write response ...
	2025/11/15 09:44:06 Ready to marshal response ...
	2025/11/15 09:44:06 Ready to write response ...
	2025/11/15 09:44:16 Ready to marshal response ...
	2025/11/15 09:44:16 Ready to write response ...
	2025/11/15 09:44:16 Ready to marshal response ...
	2025/11/15 09:44:16 Ready to write response ...
	2025/11/15 09:44:20 Ready to marshal response ...
	2025/11/15 09:44:20 Ready to write response ...
	2025/11/15 09:44:25 Ready to marshal response ...
	2025/11/15 09:44:25 Ready to write response ...
	2025/11/15 09:44:43 Ready to marshal response ...
	2025/11/15 09:44:43 Ready to write response ...
	2025/11/15 09:46:46 Ready to marshal response ...
	2025/11/15 09:46:46 Ready to write response ...
	
	
	==> kernel <==
	 09:46:47 up  1:29,  0 user,  load average: 0.26, 1.12, 1.70
	Linux addons-209049 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c47933319f25ea439a895d8a214194c80c6c0e5be436f0cda95e520fb61fcc42] <==
	I1115 09:44:45.788122       1 main.go:301] handling current node
	I1115 09:44:55.787768       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:44:55.787798       1 main.go:301] handling current node
	I1115 09:45:05.793393       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:45:05.793435       1 main.go:301] handling current node
	I1115 09:45:15.788030       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:45:15.788063       1 main.go:301] handling current node
	I1115 09:45:25.788130       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:45:25.788164       1 main.go:301] handling current node
	I1115 09:45:35.788847       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:45:35.788880       1 main.go:301] handling current node
	I1115 09:45:45.795008       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:45:45.795048       1 main.go:301] handling current node
	I1115 09:45:55.787302       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:45:55.787331       1 main.go:301] handling current node
	I1115 09:46:05.788817       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:46:05.788854       1 main.go:301] handling current node
	I1115 09:46:15.794076       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:46:15.794108       1 main.go:301] handling current node
	I1115 09:46:25.787816       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:46:25.787846       1 main.go:301] handling current node
	I1115 09:46:35.787562       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:46:35.787618       1 main.go:301] handling current node
	I1115 09:46:45.787332       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:46:45.787366       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8f354571302cfa24ed2fcb9dc1a59908900201f2ee8a9f0985e0cd698988ceff] <==
	E1115 09:42:36.303101       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.172.4:443: connect: connection refused" logger="UnhandledError"
	W1115 09:42:36.302628       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.172.4:443: connect: connection refused
	E1115 09:42:36.303674       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.172.4:443: connect: connection refused" logger="UnhandledError"
	W1115 09:42:36.321427       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.172.4:443: connect: connection refused
	E1115 09:42:36.321459       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.172.4:443: connect: connection refused" logger="UnhandledError"
	W1115 09:42:36.378083       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.172.4:443: connect: connection refused
	E1115 09:42:36.378128       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.172.4:443: connect: connection refused" logger="UnhandledError"
	W1115 09:42:58.159273       1 handler_proxy.go:99] no RequestInfo found in the context
	E1115 09:42:58.159317       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.21.27:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.21.27:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.21.27:443: connect: connection refused" logger="UnhandledError"
	E1115 09:42:58.159367       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1115 09:42:58.159655       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.21.27:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.21.27:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.21.27:443: connect: connection refused" logger="UnhandledError"
	E1115 09:42:58.165626       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.21.27:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.21.27:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.21.27:443: connect: connection refused" logger="UnhandledError"
	E1115 09:42:58.187097       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.21.27:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.21.27:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.21.27:443: connect: connection refused" logger="UnhandledError"
	E1115 09:42:58.229013       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.21.27:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.21.27:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.21.27:443: connect: connection refused" logger="UnhandledError"
	E1115 09:42:58.310113       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.21.27:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.21.27:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.21.27:443: connect: connection refused" logger="UnhandledError"
	I1115 09:42:58.578506       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1115 09:44:06.175037       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47426: use of closed network connection
	E1115 09:44:06.330804       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47456: use of closed network connection
	I1115 09:44:19.937210       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1115 09:44:20.179195       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.48.45"}
	I1115 09:44:38.729783       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1115 09:46:46.198720       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.63.98"}
	
	
	==> kube-controller-manager [2d0c6bfa456fdc1fae3dbb4e08aebd592ec5b3e4b99f82367927552be2506b4b] <==
	I1115 09:41:54.053221       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1115 09:41:54.054244       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 09:41:54.054250       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 09:41:54.054303       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 09:41:54.054315       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1115 09:41:54.054338       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 09:41:54.054506       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 09:41:54.056803       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 09:41:54.056874       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1115 09:41:54.057803       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 09:41:54.057937       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1115 09:41:54.059798       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 09:41:54.075157       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 09:41:54.082765       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1115 09:41:59.896363       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1115 09:42:24.013369       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1115 09:42:24.013551       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1115 09:42:24.013616       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1115 09:42:24.090012       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1115 09:42:24.093590       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1115 09:42:24.114727       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 09:42:24.194507       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 09:42:39.079403       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1115 09:42:54.119826       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1115 09:42:54.203113       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [bf4bdcddcb90f809aee08bb750d5e3282b1083556ad258032140c881f019a634] <==
	I1115 09:41:55.312438       1 server_linux.go:53] "Using iptables proxy"
	I1115 09:41:55.393906       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 09:41:55.496194       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 09:41:55.497244       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1115 09:41:55.497352       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 09:41:55.879978       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 09:41:55.880057       1 server_linux.go:132] "Using iptables Proxier"
	I1115 09:41:55.979305       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 09:41:55.985531       1 server.go:527] "Version info" version="v1.34.1"
	I1115 09:41:55.986131       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:41:55.994443       1 config.go:200] "Starting service config controller"
	I1115 09:41:55.994464       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 09:41:55.994489       1 config.go:106] "Starting endpoint slice config controller"
	I1115 09:41:55.994495       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 09:41:55.994508       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 09:41:55.994513       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 09:41:55.995440       1 config.go:309] "Starting node config controller"
	I1115 09:41:55.995450       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 09:41:55.995458       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 09:41:56.094998       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 09:41:56.095051       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 09:41:56.095084       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [fc273347a0fa0ac4a76f1735e0dc29501058daa141499c42e0703edf64d3cc86] <==
	E1115 09:41:46.694922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 09:41:46.694982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 09:41:46.695580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 09:41:46.695750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 09:41:46.696458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 09:41:46.696589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 09:41:46.696645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 09:41:46.696758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 09:41:46.697241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 09:41:46.697311       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 09:41:46.697381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 09:41:46.697397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 09:41:46.697424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 09:41:46.697475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 09:41:46.697553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 09:41:46.697596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 09:41:46.697600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 09:41:46.697578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 09:41:47.642001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 09:41:47.664106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 09:41:47.675973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 09:41:47.699698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 09:41:47.719741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 09:41:47.756848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1115 09:41:48.092551       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 09:44:51 addons-209049 kubelet[1421]: I1115 09:44:51.742243    1421 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7bcf70f4-c483-4d13-b25c-62317b3da859-gcp-creds\") on node \"addons-209049\" DevicePath \"\""
	Nov 15 09:44:51 addons-209049 kubelet[1421]: I1115 09:44:51.742289    1421 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zpr25\" (UniqueName: \"kubernetes.io/projected/7bcf70f4-c483-4d13-b25c-62317b3da859-kube-api-access-zpr25\") on node \"addons-209049\" DevicePath \"\""
	Nov 15 09:44:51 addons-209049 kubelet[1421]: I1115 09:44:51.742341    1421 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-c64dc917-230d-458c-ad62-1fc5c81f1c33\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^b64c75d9-c207-11f0-85fe-466f007a7678\") on node \"addons-209049\" "
	Nov 15 09:44:51 addons-209049 kubelet[1421]: I1115 09:44:51.747540    1421 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-c64dc917-230d-458c-ad62-1fc5c81f1c33" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^b64c75d9-c207-11f0-85fe-466f007a7678") on node "addons-209049"
	Nov 15 09:44:51 addons-209049 kubelet[1421]: I1115 09:44:51.775253    1421 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-d6rh5" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 09:44:51 addons-209049 kubelet[1421]: W1115 09:44:51.797484    1421 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/95837a795344356a1a114ddf87e92b007ae01ddbe0deac5a406c954fdd9cec8c/crio-490131c22012c2b00b12f47d456eb9615c31d525902bacc21fc79682a85fbbfd WatchSource:0}: Error finding container 490131c22012c2b00b12f47d456eb9615c31d525902bacc21fc79682a85fbbfd: Status 404 returned error can't find the container with id 490131c22012c2b00b12f47d456eb9615c31d525902bacc21fc79682a85fbbfd
	Nov 15 09:44:51 addons-209049 kubelet[1421]: I1115 09:44:51.842723    1421 reconciler_common.go:299] "Volume detached for volume \"pvc-c64dc917-230d-458c-ad62-1fc5c81f1c33\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^b64c75d9-c207-11f0-85fe-466f007a7678\") on node \"addons-209049\" DevicePath \"\""
	Nov 15 09:44:51 addons-209049 kubelet[1421]: I1115 09:44:51.970530    1421 scope.go:117] "RemoveContainer" containerID="116758b31063fa4ae23e45aea7a2c213678160ccdc7976230e4e6d482d4de8bb"
	Nov 15 09:44:51 addons-209049 kubelet[1421]: I1115 09:44:51.979931    1421 scope.go:117] "RemoveContainer" containerID="116758b31063fa4ae23e45aea7a2c213678160ccdc7976230e4e6d482d4de8bb"
	Nov 15 09:44:51 addons-209049 kubelet[1421]: E1115 09:44:51.980343    1421 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"116758b31063fa4ae23e45aea7a2c213678160ccdc7976230e4e6d482d4de8bb\": container with ID starting with 116758b31063fa4ae23e45aea7a2c213678160ccdc7976230e4e6d482d4de8bb not found: ID does not exist" containerID="116758b31063fa4ae23e45aea7a2c213678160ccdc7976230e4e6d482d4de8bb"
	Nov 15 09:44:51 addons-209049 kubelet[1421]: I1115 09:44:51.980393    1421 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"116758b31063fa4ae23e45aea7a2c213678160ccdc7976230e4e6d482d4de8bb"} err="failed to get container status \"116758b31063fa4ae23e45aea7a2c213678160ccdc7976230e4e6d482d4de8bb\": rpc error: code = NotFound desc = could not find container \"116758b31063fa4ae23e45aea7a2c213678160ccdc7976230e4e6d482d4de8bb\": container with ID starting with 116758b31063fa4ae23e45aea7a2c213678160ccdc7976230e4e6d482d4de8bb not found: ID does not exist"
	Nov 15 09:44:52 addons-209049 kubelet[1421]: I1115 09:44:52.779095    1421 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bcf70f4-c483-4d13-b25c-62317b3da859" path="/var/lib/kubelet/pods/7bcf70f4-c483-4d13-b25c-62317b3da859/volumes"
	Nov 15 09:44:54 addons-209049 kubelet[1421]: I1115 09:44:54.775636    1421 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-xzbqg" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 09:44:55 addons-209049 kubelet[1421]: I1115 09:44:55.990683    1421 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-d6rh5" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 09:44:56 addons-209049 kubelet[1421]: I1115 09:44:56.001048    1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-d6rh5" podStartSLOduration=175.618321525 podStartE2EDuration="2m59.001025838s" podCreationTimestamp="2025-11-15 09:41:57 +0000 UTC" firstStartedPulling="2025-11-15 09:44:51.799858651 +0000 UTC m=+183.162714812" lastFinishedPulling="2025-11-15 09:44:55.18256296 +0000 UTC m=+186.545419125" observedRunningTime="2025-11-15 09:44:56.000679589 +0000 UTC m=+187.363535796" watchObservedRunningTime="2025-11-15 09:44:56.001025838 +0000 UTC m=+187.363882038"
	Nov 15 09:44:56 addons-209049 kubelet[1421]: I1115 09:44:56.994650    1421 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-d6rh5" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 09:45:17 addons-209049 kubelet[1421]: I1115 09:45:17.775077    1421 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-fwbg5" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 09:45:44 addons-209049 kubelet[1421]: I1115 09:45:44.775849    1421 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-zxglt" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 09:45:47 addons-209049 kubelet[1421]: I1115 09:45:47.774944    1421 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-qtrg4" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 09:46:05 addons-209049 kubelet[1421]: I1115 09:46:05.775717    1421 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-d6rh5" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 09:46:20 addons-209049 kubelet[1421]: I1115 09:46:20.775338    1421 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-xzbqg" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 09:46:45 addons-209049 kubelet[1421]: I1115 09:46:45.775413    1421 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-fwbg5" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 09:46:46 addons-209049 kubelet[1421]: I1115 09:46:46.289628    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ae43d79f-edf7-4aea-9a38-775a6f6c5b19-gcp-creds\") pod \"hello-world-app-5d498dc89-dn92n\" (UID: \"ae43d79f-edf7-4aea-9a38-775a6f6c5b19\") " pod="default/hello-world-app-5d498dc89-dn92n"
	Nov 15 09:46:46 addons-209049 kubelet[1421]: I1115 09:46:46.289684    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9f5s\" (UniqueName: \"kubernetes.io/projected/ae43d79f-edf7-4aea-9a38-775a6f6c5b19-kube-api-access-p9f5s\") pod \"hello-world-app-5d498dc89-dn92n\" (UID: \"ae43d79f-edf7-4aea-9a38-775a6f6c5b19\") " pod="default/hello-world-app-5d498dc89-dn92n"
	Nov 15 09:46:46 addons-209049 kubelet[1421]: W1115 09:46:46.460854    1421 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/95837a795344356a1a114ddf87e92b007ae01ddbe0deac5a406c954fdd9cec8c/crio-a686c850b6adc42a6ca48d9e18bb45f2884922af30cb7179256269632ae9f73e WatchSource:0}: Error finding container a686c850b6adc42a6ca48d9e18bb45f2884922af30cb7179256269632ae9f73e: Status 404 returned error can't find the container with id a686c850b6adc42a6ca48d9e18bb45f2884922af30cb7179256269632ae9f73e
	
	
	==> storage-provisioner [1bdef9117bea1f9beb91b7cd74c36b09d5d21005cb342d6d96771811af8c1a65] <==
	W1115 09:46:22.522224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:46:24.524904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:46:24.529715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:46:26.532531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:46:26.537066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:46:28.540088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:46:28.543684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:46:30.546579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:46:30.551363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:46:32.553641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:46:32.557597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:46:34.560550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:46:34.564228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:46:36.567229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:46:36.571836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:46:38.574687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:46:38.578417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:46:40.582102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:46:40.587439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:46:42.590421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:46:42.595171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:46:44.598447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:46:44.602447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:46:46.605433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:46:46.609294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-209049 -n addons-209049
helpers_test.go:269: (dbg) Run:  kubectl --context addons-209049 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-fxrnb ingress-nginx-admission-patch-d5h7k
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-209049 describe pod ingress-nginx-admission-create-fxrnb ingress-nginx-admission-patch-d5h7k
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-209049 describe pod ingress-nginx-admission-create-fxrnb ingress-nginx-admission-patch-d5h7k: exit status 1 (58.009543ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-fxrnb" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-d5h7k" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-209049 describe pod ingress-nginx-admission-create-fxrnb ingress-nginx-admission-patch-d5h7k: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-209049 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-209049 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (250.258586ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1115 09:46:48.614678   75198 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:46:48.614793   75198 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:46:48.614802   75198 out.go:374] Setting ErrFile to fd 2...
	I1115 09:46:48.614806   75198 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:46:48.615030   75198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 09:46:48.615380   75198 mustload.go:66] Loading cluster: addons-209049
	I1115 09:46:48.615772   75198 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:46:48.615792   75198 addons.go:607] checking whether the cluster is paused
	I1115 09:46:48.615878   75198 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:46:48.615891   75198 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:46:48.616360   75198 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:46:48.634886   75198 ssh_runner.go:195] Run: systemctl --version
	I1115 09:46:48.634943   75198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:46:48.656754   75198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:46:48.748429   75198 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:46:48.748538   75198 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:46:48.780567   75198 cri.go:89] found id: "9c8fb48c2845a5562879a83457c77d601d8d01fdacab0223b315a437b895abad"
	I1115 09:46:48.780593   75198 cri.go:89] found id: "3caa65a862513e243b22e76e1bbf885e3ac037a176ca99b12f91d698216975db"
	I1115 09:46:48.780597   75198 cri.go:89] found id: "b0ff95e639d4d5eb2c0d77d21ada78f8cd1fb53b0f074a034a43d45e26991503"
	I1115 09:46:48.780601   75198 cri.go:89] found id: "c9bdb51e12a14483aa19c676698377a0b60b77720f0628965eea3278bbb8e041"
	I1115 09:46:48.780604   75198 cri.go:89] found id: "e7dcd097399e865e5ae5f0e729bdc0f1048db8b421e3cb038204849e0f017ce4"
	I1115 09:46:48.780607   75198 cri.go:89] found id: "acf4ca35c9f55a5b7ca511482b0cfe1fd19ad4da286e419734932ebc92d08987"
	I1115 09:46:48.780609   75198 cri.go:89] found id: "171c9fa6da7f829033739f1a599a437a0f9fb0490f6e35cb28121675ec6610d5"
	I1115 09:46:48.780611   75198 cri.go:89] found id: "ff456e58c6b5371ae6d2128ce2bd4b5688b2bdd7c3b3eb2b96101bb1b58fe2c5"
	I1115 09:46:48.780614   75198 cri.go:89] found id: "ee4946da5ae0da768ba560dc1b29498a8426405bac06d4d18f0a838d40c80725"
	I1115 09:46:48.780619   75198 cri.go:89] found id: "103636dd26caf3b9549cd406346b20c65628300e437e7be4616d3f0077c744bd"
	I1115 09:46:48.780622   75198 cri.go:89] found id: "49680c8d74f4f54e56ef4ffd5f2ca74f9034bbb45596e528bea07a9727288e4c"
	I1115 09:46:48.780624   75198 cri.go:89] found id: "b667435537d1ec9aa535058d15e290f6af371012bddf3d30095187b5c578a6f7"
	I1115 09:46:48.780628   75198 cri.go:89] found id: "0eebfaee45b609055247f2529eb805a5c5b268a2be72bc3dc72864bae3e6b98f"
	I1115 09:46:48.780632   75198 cri.go:89] found id: "2e0b28ec2dfa3079760cebaa9bb76189b77530f1910306d850dbf67dea10ec99"
	I1115 09:46:48.780636   75198 cri.go:89] found id: "4c734c004dda05124ee316d7d4e813486d10a56625af5fde238c99fe3c7fcb59"
	I1115 09:46:48.780647   75198 cri.go:89] found id: "db6577073b2a6a2e7ebbec11d5b9ea8189c70d1adfad3656b9c47577a4cda077"
	I1115 09:46:48.780655   75198 cri.go:89] found id: "ed655dc7f306b7a351e2adf5b80bffbf7072a9bb72cdb5eac89f029f44b5af1e"
	I1115 09:46:48.780661   75198 cri.go:89] found id: "abed161df6f25739be955e904c69c8cc237aee9607441efb5b85c298a8066bc3"
	I1115 09:46:48.780665   75198 cri.go:89] found id: "1bdef9117bea1f9beb91b7cd74c36b09d5d21005cb342d6d96771811af8c1a65"
	I1115 09:46:48.780669   75198 cri.go:89] found id: "c47933319f25ea439a895d8a214194c80c6c0e5be436f0cda95e520fb61fcc42"
	I1115 09:46:48.780676   75198 cri.go:89] found id: "bf4bdcddcb90f809aee08bb750d5e3282b1083556ad258032140c881f019a634"
	I1115 09:46:48.780680   75198 cri.go:89] found id: "2d0c6bfa456fdc1fae3dbb4e08aebd592ec5b3e4b99f82367927552be2506b4b"
	I1115 09:46:48.780684   75198 cri.go:89] found id: "fc273347a0fa0ac4a76f1735e0dc29501058daa141499c42e0703edf64d3cc86"
	I1115 09:46:48.780688   75198 cri.go:89] found id: "8f354571302cfa24ed2fcb9dc1a59908900201f2ee8a9f0985e0cd698988ceff"
	I1115 09:46:48.780692   75198 cri.go:89] found id: "1f26d41b1ae72e1dc34f54376e004475d2c4c414c8b70169e5095a83b0a7212d"
	I1115 09:46:48.780699   75198 cri.go:89] found id: ""
	I1115 09:46:48.780748   75198 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:46:48.795379   75198 out.go:203] 
	W1115 09:46:48.796560   75198 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:46:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:46:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:46:48.796589   75198 out.go:285] * 
	* 
	W1115 09:46:48.801052   75198 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:46:48.802395   75198 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-209049 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-209049 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-209049 addons disable ingress --alsologtostderr -v=1: exit status 11 (245.698712ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1115 09:46:48.865232   75261 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:46:48.865477   75261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:46:48.865485   75261 out.go:374] Setting ErrFile to fd 2...
	I1115 09:46:48.865490   75261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:46:48.865665   75261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 09:46:48.865922   75261 mustload.go:66] Loading cluster: addons-209049
	I1115 09:46:48.866260   75261 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:46:48.866278   75261 addons.go:607] checking whether the cluster is paused
	I1115 09:46:48.866358   75261 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:46:48.866370   75261 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:46:48.866720   75261 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:46:48.884782   75261 ssh_runner.go:195] Run: systemctl --version
	I1115 09:46:48.884842   75261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:46:48.904037   75261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:46:48.997597   75261 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:46:48.997731   75261 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:46:49.027324   75261 cri.go:89] found id: "9c8fb48c2845a5562879a83457c77d601d8d01fdacab0223b315a437b895abad"
	I1115 09:46:49.027349   75261 cri.go:89] found id: "3caa65a862513e243b22e76e1bbf885e3ac037a176ca99b12f91d698216975db"
	I1115 09:46:49.027355   75261 cri.go:89] found id: "b0ff95e639d4d5eb2c0d77d21ada78f8cd1fb53b0f074a034a43d45e26991503"
	I1115 09:46:49.027360   75261 cri.go:89] found id: "c9bdb51e12a14483aa19c676698377a0b60b77720f0628965eea3278bbb8e041"
	I1115 09:46:49.027364   75261 cri.go:89] found id: "e7dcd097399e865e5ae5f0e729bdc0f1048db8b421e3cb038204849e0f017ce4"
	I1115 09:46:49.027368   75261 cri.go:89] found id: "acf4ca35c9f55a5b7ca511482b0cfe1fd19ad4da286e419734932ebc92d08987"
	I1115 09:46:49.027371   75261 cri.go:89] found id: "171c9fa6da7f829033739f1a599a437a0f9fb0490f6e35cb28121675ec6610d5"
	I1115 09:46:49.027375   75261 cri.go:89] found id: "ff456e58c6b5371ae6d2128ce2bd4b5688b2bdd7c3b3eb2b96101bb1b58fe2c5"
	I1115 09:46:49.027378   75261 cri.go:89] found id: "ee4946da5ae0da768ba560dc1b29498a8426405bac06d4d18f0a838d40c80725"
	I1115 09:46:49.027385   75261 cri.go:89] found id: "103636dd26caf3b9549cd406346b20c65628300e437e7be4616d3f0077c744bd"
	I1115 09:46:49.027390   75261 cri.go:89] found id: "49680c8d74f4f54e56ef4ffd5f2ca74f9034bbb45596e528bea07a9727288e4c"
	I1115 09:46:49.027393   75261 cri.go:89] found id: "b667435537d1ec9aa535058d15e290f6af371012bddf3d30095187b5c578a6f7"
	I1115 09:46:49.027397   75261 cri.go:89] found id: "0eebfaee45b609055247f2529eb805a5c5b268a2be72bc3dc72864bae3e6b98f"
	I1115 09:46:49.027402   75261 cri.go:89] found id: "2e0b28ec2dfa3079760cebaa9bb76189b77530f1910306d850dbf67dea10ec99"
	I1115 09:46:49.027405   75261 cri.go:89] found id: "4c734c004dda05124ee316d7d4e813486d10a56625af5fde238c99fe3c7fcb59"
	I1115 09:46:49.027435   75261 cri.go:89] found id: "db6577073b2a6a2e7ebbec11d5b9ea8189c70d1adfad3656b9c47577a4cda077"
	I1115 09:46:49.027443   75261 cri.go:89] found id: "ed655dc7f306b7a351e2adf5b80bffbf7072a9bb72cdb5eac89f029f44b5af1e"
	I1115 09:46:49.027448   75261 cri.go:89] found id: "abed161df6f25739be955e904c69c8cc237aee9607441efb5b85c298a8066bc3"
	I1115 09:46:49.027452   75261 cri.go:89] found id: "1bdef9117bea1f9beb91b7cd74c36b09d5d21005cb342d6d96771811af8c1a65"
	I1115 09:46:49.027455   75261 cri.go:89] found id: "c47933319f25ea439a895d8a214194c80c6c0e5be436f0cda95e520fb61fcc42"
	I1115 09:46:49.027459   75261 cri.go:89] found id: "bf4bdcddcb90f809aee08bb750d5e3282b1083556ad258032140c881f019a634"
	I1115 09:46:49.027463   75261 cri.go:89] found id: "2d0c6bfa456fdc1fae3dbb4e08aebd592ec5b3e4b99f82367927552be2506b4b"
	I1115 09:46:49.027466   75261 cri.go:89] found id: "fc273347a0fa0ac4a76f1735e0dc29501058daa141499c42e0703edf64d3cc86"
	I1115 09:46:49.027470   75261 cri.go:89] found id: "8f354571302cfa24ed2fcb9dc1a59908900201f2ee8a9f0985e0cd698988ceff"
	I1115 09:46:49.027475   75261 cri.go:89] found id: "1f26d41b1ae72e1dc34f54376e004475d2c4c414c8b70169e5095a83b0a7212d"
	I1115 09:46:49.027480   75261 cri.go:89] found id: ""
	I1115 09:46:49.027527   75261 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:46:49.041514   75261 out.go:203] 
	W1115 09:46:49.042901   75261 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:46:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:46:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:46:49.042933   75261 out.go:285] * 
	* 
	W1115 09:46:49.047302   75261 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:46:49.048733   75261 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-209049 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (149.36s)

TestAddons/parallel/InspektorGadget (5.25s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-cbnnb" [6d5ee9cc-7112-4cbf-a5af-25487c7ee400] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003964022s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-209049 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-209049 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (240.305391ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1115 09:44:27.100549   72075 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:44:27.100674   72075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:44:27.100683   72075 out.go:374] Setting ErrFile to fd 2...
	I1115 09:44:27.100686   72075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:44:27.100870   72075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 09:44:27.101129   72075 mustload.go:66] Loading cluster: addons-209049
	I1115 09:44:27.101469   72075 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:44:27.101485   72075 addons.go:607] checking whether the cluster is paused
	I1115 09:44:27.101565   72075 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:44:27.101577   72075 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:44:27.101942   72075 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:44:27.119807   72075 ssh_runner.go:195] Run: systemctl --version
	I1115 09:44:27.119877   72075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:44:27.137240   72075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:44:27.229660   72075 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:44:27.229726   72075 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:44:27.259255   72075 cri.go:89] found id: "3caa65a862513e243b22e76e1bbf885e3ac037a176ca99b12f91d698216975db"
	I1115 09:44:27.259278   72075 cri.go:89] found id: "b0ff95e639d4d5eb2c0d77d21ada78f8cd1fb53b0f074a034a43d45e26991503"
	I1115 09:44:27.259283   72075 cri.go:89] found id: "c9bdb51e12a14483aa19c676698377a0b60b77720f0628965eea3278bbb8e041"
	I1115 09:44:27.259288   72075 cri.go:89] found id: "e7dcd097399e865e5ae5f0e729bdc0f1048db8b421e3cb038204849e0f017ce4"
	I1115 09:44:27.259292   72075 cri.go:89] found id: "acf4ca35c9f55a5b7ca511482b0cfe1fd19ad4da286e419734932ebc92d08987"
	I1115 09:44:27.259297   72075 cri.go:89] found id: "171c9fa6da7f829033739f1a599a437a0f9fb0490f6e35cb28121675ec6610d5"
	I1115 09:44:27.259301   72075 cri.go:89] found id: "ff456e58c6b5371ae6d2128ce2bd4b5688b2bdd7c3b3eb2b96101bb1b58fe2c5"
	I1115 09:44:27.259305   72075 cri.go:89] found id: "ee4946da5ae0da768ba560dc1b29498a8426405bac06d4d18f0a838d40c80725"
	I1115 09:44:27.259309   72075 cri.go:89] found id: "103636dd26caf3b9549cd406346b20c65628300e437e7be4616d3f0077c744bd"
	I1115 09:44:27.259325   72075 cri.go:89] found id: "49680c8d74f4f54e56ef4ffd5f2ca74f9034bbb45596e528bea07a9727288e4c"
	I1115 09:44:27.259331   72075 cri.go:89] found id: "b667435537d1ec9aa535058d15e290f6af371012bddf3d30095187b5c578a6f7"
	I1115 09:44:27.259336   72075 cri.go:89] found id: "0eebfaee45b609055247f2529eb805a5c5b268a2be72bc3dc72864bae3e6b98f"
	I1115 09:44:27.259358   72075 cri.go:89] found id: "2e0b28ec2dfa3079760cebaa9bb76189b77530f1910306d850dbf67dea10ec99"
	I1115 09:44:27.259363   72075 cri.go:89] found id: "4c734c004dda05124ee316d7d4e813486d10a56625af5fde238c99fe3c7fcb59"
	I1115 09:44:27.259368   72075 cri.go:89] found id: "db6577073b2a6a2e7ebbec11d5b9ea8189c70d1adfad3656b9c47577a4cda077"
	I1115 09:44:27.259392   72075 cri.go:89] found id: "ed655dc7f306b7a351e2adf5b80bffbf7072a9bb72cdb5eac89f029f44b5af1e"
	I1115 09:44:27.259404   72075 cri.go:89] found id: "abed161df6f25739be955e904c69c8cc237aee9607441efb5b85c298a8066bc3"
	I1115 09:44:27.259411   72075 cri.go:89] found id: "1bdef9117bea1f9beb91b7cd74c36b09d5d21005cb342d6d96771811af8c1a65"
	I1115 09:44:27.259414   72075 cri.go:89] found id: "c47933319f25ea439a895d8a214194c80c6c0e5be436f0cda95e520fb61fcc42"
	I1115 09:44:27.259417   72075 cri.go:89] found id: "bf4bdcddcb90f809aee08bb750d5e3282b1083556ad258032140c881f019a634"
	I1115 09:44:27.259425   72075 cri.go:89] found id: "2d0c6bfa456fdc1fae3dbb4e08aebd592ec5b3e4b99f82367927552be2506b4b"
	I1115 09:44:27.259430   72075 cri.go:89] found id: "fc273347a0fa0ac4a76f1735e0dc29501058daa141499c42e0703edf64d3cc86"
	I1115 09:44:27.259437   72075 cri.go:89] found id: "8f354571302cfa24ed2fcb9dc1a59908900201f2ee8a9f0985e0cd698988ceff"
	I1115 09:44:27.259442   72075 cri.go:89] found id: "1f26d41b1ae72e1dc34f54376e004475d2c4c414c8b70169e5095a83b0a7212d"
	I1115 09:44:27.259449   72075 cri.go:89] found id: ""
	I1115 09:44:27.259496   72075 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:44:27.273759   72075 out.go:203] 
	W1115 09:44:27.274975   72075 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:44:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:44:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:44:27.275007   72075 out.go:285] * 
	* 
	W1115 09:44:27.279457   72075 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:44:27.280940   72075 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-209049 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.25s)

TestAddons/parallel/MetricsServer (5.32s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.091814ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-sgjrz" [72926a73-00e6-4532-92f6-7db18c63f53f] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004076363s
addons_test.go:463: (dbg) Run:  kubectl --context addons-209049 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-209049 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-209049 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (246.447452ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1115 09:44:21.849496   71202 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:44:21.850796   71202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:44:21.850892   71202 out.go:374] Setting ErrFile to fd 2...
	I1115 09:44:21.851028   71202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:44:21.851315   71202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 09:44:21.851661   71202 mustload.go:66] Loading cluster: addons-209049
	I1115 09:44:21.852072   71202 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:44:21.852092   71202 addons.go:607] checking whether the cluster is paused
	I1115 09:44:21.852204   71202 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:44:21.852222   71202 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:44:21.852620   71202 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:44:21.870563   71202 ssh_runner.go:195] Run: systemctl --version
	I1115 09:44:21.870629   71202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:44:21.887569   71202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:44:21.982967   71202 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:44:21.983058   71202 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:44:22.013030   71202 cri.go:89] found id: "3caa65a862513e243b22e76e1bbf885e3ac037a176ca99b12f91d698216975db"
	I1115 09:44:22.013059   71202 cri.go:89] found id: "b0ff95e639d4d5eb2c0d77d21ada78f8cd1fb53b0f074a034a43d45e26991503"
	I1115 09:44:22.013065   71202 cri.go:89] found id: "c9bdb51e12a14483aa19c676698377a0b60b77720f0628965eea3278bbb8e041"
	I1115 09:44:22.013070   71202 cri.go:89] found id: "e7dcd097399e865e5ae5f0e729bdc0f1048db8b421e3cb038204849e0f017ce4"
	I1115 09:44:22.013075   71202 cri.go:89] found id: "acf4ca35c9f55a5b7ca511482b0cfe1fd19ad4da286e419734932ebc92d08987"
	I1115 09:44:22.013081   71202 cri.go:89] found id: "171c9fa6da7f829033739f1a599a437a0f9fb0490f6e35cb28121675ec6610d5"
	I1115 09:44:22.013086   71202 cri.go:89] found id: "ff456e58c6b5371ae6d2128ce2bd4b5688b2bdd7c3b3eb2b96101bb1b58fe2c5"
	I1115 09:44:22.013091   71202 cri.go:89] found id: "ee4946da5ae0da768ba560dc1b29498a8426405bac06d4d18f0a838d40c80725"
	I1115 09:44:22.013095   71202 cri.go:89] found id: "103636dd26caf3b9549cd406346b20c65628300e437e7be4616d3f0077c744bd"
	I1115 09:44:22.013103   71202 cri.go:89] found id: "49680c8d74f4f54e56ef4ffd5f2ca74f9034bbb45596e528bea07a9727288e4c"
	I1115 09:44:22.013114   71202 cri.go:89] found id: "b667435537d1ec9aa535058d15e290f6af371012bddf3d30095187b5c578a6f7"
	I1115 09:44:22.013118   71202 cri.go:89] found id: "0eebfaee45b609055247f2529eb805a5c5b268a2be72bc3dc72864bae3e6b98f"
	I1115 09:44:22.013122   71202 cri.go:89] found id: "2e0b28ec2dfa3079760cebaa9bb76189b77530f1910306d850dbf67dea10ec99"
	I1115 09:44:22.013126   71202 cri.go:89] found id: "4c734c004dda05124ee316d7d4e813486d10a56625af5fde238c99fe3c7fcb59"
	I1115 09:44:22.013129   71202 cri.go:89] found id: "db6577073b2a6a2e7ebbec11d5b9ea8189c70d1adfad3656b9c47577a4cda077"
	I1115 09:44:22.013140   71202 cri.go:89] found id: "ed655dc7f306b7a351e2adf5b80bffbf7072a9bb72cdb5eac89f029f44b5af1e"
	I1115 09:44:22.013159   71202 cri.go:89] found id: "abed161df6f25739be955e904c69c8cc237aee9607441efb5b85c298a8066bc3"
	I1115 09:44:22.013165   71202 cri.go:89] found id: "1bdef9117bea1f9beb91b7cd74c36b09d5d21005cb342d6d96771811af8c1a65"
	I1115 09:44:22.013170   71202 cri.go:89] found id: "c47933319f25ea439a895d8a214194c80c6c0e5be436f0cda95e520fb61fcc42"
	I1115 09:44:22.013177   71202 cri.go:89] found id: "bf4bdcddcb90f809aee08bb750d5e3282b1083556ad258032140c881f019a634"
	I1115 09:44:22.013182   71202 cri.go:89] found id: "2d0c6bfa456fdc1fae3dbb4e08aebd592ec5b3e4b99f82367927552be2506b4b"
	I1115 09:44:22.013186   71202 cri.go:89] found id: "fc273347a0fa0ac4a76f1735e0dc29501058daa141499c42e0703edf64d3cc86"
	I1115 09:44:22.013191   71202 cri.go:89] found id: "8f354571302cfa24ed2fcb9dc1a59908900201f2ee8a9f0985e0cd698988ceff"
	I1115 09:44:22.013195   71202 cri.go:89] found id: "1f26d41b1ae72e1dc34f54376e004475d2c4c414c8b70169e5095a83b0a7212d"
	I1115 09:44:22.013199   71202 cri.go:89] found id: ""
	I1115 09:44:22.013255   71202 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:44:22.028167   71202 out.go:203] 
	W1115 09:44:22.029439   71202 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:44:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:44:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:44:22.029469   71202 out.go:285] * 
	* 
	W1115 09:44:22.033795   71202 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:44:22.035084   71202 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-209049 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.32s)

TestAddons/parallel/CSI (35.44s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1115 09:44:17.382071   58962 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1115 09:44:17.385162   58962 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1115 09:44:17.385183   58962 kapi.go:107] duration metric: took 3.137268ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.146322ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-209049 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-209049 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-209049 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-209049 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-209049 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-209049 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-209049 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-209049 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-209049 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-209049 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-209049 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [bcd34fa6-54b5-4576-a39c-6d79079f529e] Pending
helpers_test.go:352: "task-pv-pod" [bcd34fa6-54b5-4576-a39c-6d79079f529e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [bcd34fa6-54b5-4576-a39c-6d79079f529e] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.003785182s
addons_test.go:572: (dbg) Run:  kubectl --context addons-209049 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-209049 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-209049 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-209049 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-209049 delete pod task-pv-pod: (1.148493757s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-209049 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-209049 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-209049 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-209049 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-209049 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-209049 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [7bcf70f4-c483-4d13-b25c-62317b3da859] Pending
helpers_test.go:352: "task-pv-pod-restore" [7bcf70f4-c483-4d13-b25c-62317b3da859] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [7bcf70f4-c483-4d13-b25c-62317b3da859] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003494896s
addons_test.go:614: (dbg) Run:  kubectl --context addons-209049 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-209049 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-209049 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-209049 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-209049 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (250.871431ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1115 09:44:52.380545   72916 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:44:52.380660   72916 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:44:52.380669   72916 out.go:374] Setting ErrFile to fd 2...
	I1115 09:44:52.380673   72916 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:44:52.380884   72916 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 09:44:52.381245   72916 mustload.go:66] Loading cluster: addons-209049
	I1115 09:44:52.381607   72916 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:44:52.381623   72916 addons.go:607] checking whether the cluster is paused
	I1115 09:44:52.381707   72916 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:44:52.381720   72916 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:44:52.382112   72916 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:44:52.401801   72916 ssh_runner.go:195] Run: systemctl --version
	I1115 09:44:52.401877   72916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:44:52.421985   72916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:44:52.515912   72916 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:44:52.516013   72916 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:44:52.546367   72916 cri.go:89] found id: "3caa65a862513e243b22e76e1bbf885e3ac037a176ca99b12f91d698216975db"
	I1115 09:44:52.546396   72916 cri.go:89] found id: "b0ff95e639d4d5eb2c0d77d21ada78f8cd1fb53b0f074a034a43d45e26991503"
	I1115 09:44:52.546403   72916 cri.go:89] found id: "c9bdb51e12a14483aa19c676698377a0b60b77720f0628965eea3278bbb8e041"
	I1115 09:44:52.546412   72916 cri.go:89] found id: "e7dcd097399e865e5ae5f0e729bdc0f1048db8b421e3cb038204849e0f017ce4"
	I1115 09:44:52.546416   72916 cri.go:89] found id: "acf4ca35c9f55a5b7ca511482b0cfe1fd19ad4da286e419734932ebc92d08987"
	I1115 09:44:52.546422   72916 cri.go:89] found id: "171c9fa6da7f829033739f1a599a437a0f9fb0490f6e35cb28121675ec6610d5"
	I1115 09:44:52.546426   72916 cri.go:89] found id: "ff456e58c6b5371ae6d2128ce2bd4b5688b2bdd7c3b3eb2b96101bb1b58fe2c5"
	I1115 09:44:52.546430   72916 cri.go:89] found id: "ee4946da5ae0da768ba560dc1b29498a8426405bac06d4d18f0a838d40c80725"
	I1115 09:44:52.546434   72916 cri.go:89] found id: "103636dd26caf3b9549cd406346b20c65628300e437e7be4616d3f0077c744bd"
	I1115 09:44:52.546441   72916 cri.go:89] found id: "49680c8d74f4f54e56ef4ffd5f2ca74f9034bbb45596e528bea07a9727288e4c"
	I1115 09:44:52.546450   72916 cri.go:89] found id: "b667435537d1ec9aa535058d15e290f6af371012bddf3d30095187b5c578a6f7"
	I1115 09:44:52.546454   72916 cri.go:89] found id: "0eebfaee45b609055247f2529eb805a5c5b268a2be72bc3dc72864bae3e6b98f"
	I1115 09:44:52.546458   72916 cri.go:89] found id: "2e0b28ec2dfa3079760cebaa9bb76189b77530f1910306d850dbf67dea10ec99"
	I1115 09:44:52.546466   72916 cri.go:89] found id: "4c734c004dda05124ee316d7d4e813486d10a56625af5fde238c99fe3c7fcb59"
	I1115 09:44:52.546470   72916 cri.go:89] found id: "db6577073b2a6a2e7ebbec11d5b9ea8189c70d1adfad3656b9c47577a4cda077"
	I1115 09:44:52.546499   72916 cri.go:89] found id: "ed655dc7f306b7a351e2adf5b80bffbf7072a9bb72cdb5eac89f029f44b5af1e"
	I1115 09:44:52.546508   72916 cri.go:89] found id: "abed161df6f25739be955e904c69c8cc237aee9607441efb5b85c298a8066bc3"
	I1115 09:44:52.546513   72916 cri.go:89] found id: "1bdef9117bea1f9beb91b7cd74c36b09d5d21005cb342d6d96771811af8c1a65"
	I1115 09:44:52.546517   72916 cri.go:89] found id: "c47933319f25ea439a895d8a214194c80c6c0e5be436f0cda95e520fb61fcc42"
	I1115 09:44:52.546522   72916 cri.go:89] found id: "bf4bdcddcb90f809aee08bb750d5e3282b1083556ad258032140c881f019a634"
	I1115 09:44:52.546530   72916 cri.go:89] found id: "2d0c6bfa456fdc1fae3dbb4e08aebd592ec5b3e4b99f82367927552be2506b4b"
	I1115 09:44:52.546535   72916 cri.go:89] found id: "fc273347a0fa0ac4a76f1735e0dc29501058daa141499c42e0703edf64d3cc86"
	I1115 09:44:52.546543   72916 cri.go:89] found id: "8f354571302cfa24ed2fcb9dc1a59908900201f2ee8a9f0985e0cd698988ceff"
	I1115 09:44:52.546547   72916 cri.go:89] found id: "1f26d41b1ae72e1dc34f54376e004475d2c4c414c8b70169e5095a83b0a7212d"
	I1115 09:44:52.546554   72916 cri.go:89] found id: ""
	I1115 09:44:52.546628   72916 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:44:52.561779   72916 out.go:203] 
	W1115 09:44:52.563137   72916 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:44:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:44:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:44:52.563162   72916 out.go:285] * 
	* 
	W1115 09:44:52.567683   72916 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:44:52.569173   72916 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-209049 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
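Every addons enable/disable attempt in this report fails before it touches the addon: minikube first checks whether the cluster is paused, and the stderr above shows how. It lists kube-system containers with crictl and then runs `sudo runc list -f json` on the node; on this crio node /run/runc does not exist, so the runc step exits 1 and the command aborts with MK_ADDON_DISABLE_PAUSED (exit status 11). The sketch below only reproduces the shape of that check outside minikube; it is not minikube's code, it assumes it runs on the node itself with crictl and runc available, and the command wiring and error handling are illustrative:

// Hedged reproduction of the paused check that fails above. It runs the same
// two commands the log shows: crictl to enumerate kube-system containers and
// `runc list -f json` to see which containers are paused. With /run/runc
// missing, the runc step is the one that returns exit status 1.
package main

import (
    "encoding/json"
    "fmt"
    "os/exec"
)

type runcContainer struct {
    ID     string `json:"id"`
    Status string `json:"status"`
}

func pausedIDs() ([]string, error) {
    // Mirrors the first step in the log: list kube-system container IDs.
    if _, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
        "--label", "io.kubernetes.pod.namespace=kube-system").Output(); err != nil {
        return nil, fmt.Errorf("crictl: %w", err)
    }

    out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
    if err != nil {
        // This is the branch hit in the report:
        // "open /run/runc: no such file or directory".
        return nil, fmt.Errorf("runc list: %w", err)
    }
    var cs []runcContainer
    if err := json.Unmarshal(out, &cs); err != nil {
        return nil, err
    }
    var paused []string
    for _, c := range cs {
        if c.Status == "paused" {
            paused = append(paused, c.ID)
        }
    }
    return paused, nil
}

func main() {
    ids, err := pausedIDs()
    fmt.Println(ids, err)
}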
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-209049 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-209049 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (245.134158ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:44:52.630395   72974 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:44:52.630518   72974 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:44:52.630527   72974 out.go:374] Setting ErrFile to fd 2...
	I1115 09:44:52.630531   72974 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:44:52.630717   72974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 09:44:52.630979   72974 mustload.go:66] Loading cluster: addons-209049
	I1115 09:44:52.631339   72974 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:44:52.631355   72974 addons.go:607] checking whether the cluster is paused
	I1115 09:44:52.631433   72974 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:44:52.631446   72974 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:44:52.631855   72974 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:44:52.650138   72974 ssh_runner.go:195] Run: systemctl --version
	I1115 09:44:52.650195   72974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:44:52.668143   72974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:44:52.762403   72974 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:44:52.762551   72974 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:44:52.792719   72974 cri.go:89] found id: "3caa65a862513e243b22e76e1bbf885e3ac037a176ca99b12f91d698216975db"
	I1115 09:44:52.792742   72974 cri.go:89] found id: "b0ff95e639d4d5eb2c0d77d21ada78f8cd1fb53b0f074a034a43d45e26991503"
	I1115 09:44:52.792745   72974 cri.go:89] found id: "c9bdb51e12a14483aa19c676698377a0b60b77720f0628965eea3278bbb8e041"
	I1115 09:44:52.792749   72974 cri.go:89] found id: "e7dcd097399e865e5ae5f0e729bdc0f1048db8b421e3cb038204849e0f017ce4"
	I1115 09:44:52.792751   72974 cri.go:89] found id: "acf4ca35c9f55a5b7ca511482b0cfe1fd19ad4da286e419734932ebc92d08987"
	I1115 09:44:52.792755   72974 cri.go:89] found id: "171c9fa6da7f829033739f1a599a437a0f9fb0490f6e35cb28121675ec6610d5"
	I1115 09:44:52.792758   72974 cri.go:89] found id: "ff456e58c6b5371ae6d2128ce2bd4b5688b2bdd7c3b3eb2b96101bb1b58fe2c5"
	I1115 09:44:52.792760   72974 cri.go:89] found id: "ee4946da5ae0da768ba560dc1b29498a8426405bac06d4d18f0a838d40c80725"
	I1115 09:44:52.792763   72974 cri.go:89] found id: "103636dd26caf3b9549cd406346b20c65628300e437e7be4616d3f0077c744bd"
	I1115 09:44:52.792769   72974 cri.go:89] found id: "49680c8d74f4f54e56ef4ffd5f2ca74f9034bbb45596e528bea07a9727288e4c"
	I1115 09:44:52.792772   72974 cri.go:89] found id: "b667435537d1ec9aa535058d15e290f6af371012bddf3d30095187b5c578a6f7"
	I1115 09:44:52.792774   72974 cri.go:89] found id: "0eebfaee45b609055247f2529eb805a5c5b268a2be72bc3dc72864bae3e6b98f"
	I1115 09:44:52.792777   72974 cri.go:89] found id: "2e0b28ec2dfa3079760cebaa9bb76189b77530f1910306d850dbf67dea10ec99"
	I1115 09:44:52.792781   72974 cri.go:89] found id: "4c734c004dda05124ee316d7d4e813486d10a56625af5fde238c99fe3c7fcb59"
	I1115 09:44:52.792783   72974 cri.go:89] found id: "db6577073b2a6a2e7ebbec11d5b9ea8189c70d1adfad3656b9c47577a4cda077"
	I1115 09:44:52.792788   72974 cri.go:89] found id: "ed655dc7f306b7a351e2adf5b80bffbf7072a9bb72cdb5eac89f029f44b5af1e"
	I1115 09:44:52.792792   72974 cri.go:89] found id: "abed161df6f25739be955e904c69c8cc237aee9607441efb5b85c298a8066bc3"
	I1115 09:44:52.792796   72974 cri.go:89] found id: "1bdef9117bea1f9beb91b7cd74c36b09d5d21005cb342d6d96771811af8c1a65"
	I1115 09:44:52.792799   72974 cri.go:89] found id: "c47933319f25ea439a895d8a214194c80c6c0e5be436f0cda95e520fb61fcc42"
	I1115 09:44:52.792801   72974 cri.go:89] found id: "bf4bdcddcb90f809aee08bb750d5e3282b1083556ad258032140c881f019a634"
	I1115 09:44:52.792807   72974 cri.go:89] found id: "2d0c6bfa456fdc1fae3dbb4e08aebd592ec5b3e4b99f82367927552be2506b4b"
	I1115 09:44:52.792809   72974 cri.go:89] found id: "fc273347a0fa0ac4a76f1735e0dc29501058daa141499c42e0703edf64d3cc86"
	I1115 09:44:52.792817   72974 cri.go:89] found id: "8f354571302cfa24ed2fcb9dc1a59908900201f2ee8a9f0985e0cd698988ceff"
	I1115 09:44:52.792820   72974 cri.go:89] found id: "1f26d41b1ae72e1dc34f54376e004475d2c4c414c8b70169e5095a83b0a7212d"
	I1115 09:44:52.792822   72974 cri.go:89] found id: ""
	I1115 09:44:52.792860   72974 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:44:52.807730   72974 out.go:203] 
	W1115 09:44:52.809123   72974 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:44:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:44:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:44:52.809146   72974 out.go:285] * 
	* 
	W1115 09:44:52.813434   72974 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:44:52.814897   72974 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-209049 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (35.44s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (2.61s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-209049 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-209049 --alsologtostderr -v=1: exit status 11 (271.758297ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:44:06.645656   69119 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:44:06.646003   69119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:44:06.646018   69119 out.go:374] Setting ErrFile to fd 2...
	I1115 09:44:06.646025   69119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:44:06.646346   69119 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 09:44:06.646669   69119 mustload.go:66] Loading cluster: addons-209049
	I1115 09:44:06.647021   69119 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:44:06.647037   69119 addons.go:607] checking whether the cluster is paused
	I1115 09:44:06.647121   69119 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:44:06.647133   69119 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:44:06.647493   69119 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:44:06.666947   69119 ssh_runner.go:195] Run: systemctl --version
	I1115 09:44:06.667026   69119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:44:06.686973   69119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:44:06.784531   69119 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:44:06.784619   69119 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:44:06.816642   69119 cri.go:89] found id: "3caa65a862513e243b22e76e1bbf885e3ac037a176ca99b12f91d698216975db"
	I1115 09:44:06.816668   69119 cri.go:89] found id: "b0ff95e639d4d5eb2c0d77d21ada78f8cd1fb53b0f074a034a43d45e26991503"
	I1115 09:44:06.816675   69119 cri.go:89] found id: "c9bdb51e12a14483aa19c676698377a0b60b77720f0628965eea3278bbb8e041"
	I1115 09:44:06.816681   69119 cri.go:89] found id: "e7dcd097399e865e5ae5f0e729bdc0f1048db8b421e3cb038204849e0f017ce4"
	I1115 09:44:06.816686   69119 cri.go:89] found id: "acf4ca35c9f55a5b7ca511482b0cfe1fd19ad4da286e419734932ebc92d08987"
	I1115 09:44:06.816691   69119 cri.go:89] found id: "171c9fa6da7f829033739f1a599a437a0f9fb0490f6e35cb28121675ec6610d5"
	I1115 09:44:06.816696   69119 cri.go:89] found id: "ff456e58c6b5371ae6d2128ce2bd4b5688b2bdd7c3b3eb2b96101bb1b58fe2c5"
	I1115 09:44:06.816700   69119 cri.go:89] found id: "ee4946da5ae0da768ba560dc1b29498a8426405bac06d4d18f0a838d40c80725"
	I1115 09:44:06.816704   69119 cri.go:89] found id: "103636dd26caf3b9549cd406346b20c65628300e437e7be4616d3f0077c744bd"
	I1115 09:44:06.816712   69119 cri.go:89] found id: "49680c8d74f4f54e56ef4ffd5f2ca74f9034bbb45596e528bea07a9727288e4c"
	I1115 09:44:06.816717   69119 cri.go:89] found id: "b667435537d1ec9aa535058d15e290f6af371012bddf3d30095187b5c578a6f7"
	I1115 09:44:06.816723   69119 cri.go:89] found id: "0eebfaee45b609055247f2529eb805a5c5b268a2be72bc3dc72864bae3e6b98f"
	I1115 09:44:06.816727   69119 cri.go:89] found id: "2e0b28ec2dfa3079760cebaa9bb76189b77530f1910306d850dbf67dea10ec99"
	I1115 09:44:06.816733   69119 cri.go:89] found id: "4c734c004dda05124ee316d7d4e813486d10a56625af5fde238c99fe3c7fcb59"
	I1115 09:44:06.816738   69119 cri.go:89] found id: "db6577073b2a6a2e7ebbec11d5b9ea8189c70d1adfad3656b9c47577a4cda077"
	I1115 09:44:06.816765   69119 cri.go:89] found id: "ed655dc7f306b7a351e2adf5b80bffbf7072a9bb72cdb5eac89f029f44b5af1e"
	I1115 09:44:06.816775   69119 cri.go:89] found id: "abed161df6f25739be955e904c69c8cc237aee9607441efb5b85c298a8066bc3"
	I1115 09:44:06.816780   69119 cri.go:89] found id: "1bdef9117bea1f9beb91b7cd74c36b09d5d21005cb342d6d96771811af8c1a65"
	I1115 09:44:06.816785   69119 cri.go:89] found id: "c47933319f25ea439a895d8a214194c80c6c0e5be436f0cda95e520fb61fcc42"
	I1115 09:44:06.816789   69119 cri.go:89] found id: "bf4bdcddcb90f809aee08bb750d5e3282b1083556ad258032140c881f019a634"
	I1115 09:44:06.816793   69119 cri.go:89] found id: "2d0c6bfa456fdc1fae3dbb4e08aebd592ec5b3e4b99f82367927552be2506b4b"
	I1115 09:44:06.816798   69119 cri.go:89] found id: "fc273347a0fa0ac4a76f1735e0dc29501058daa141499c42e0703edf64d3cc86"
	I1115 09:44:06.816803   69119 cri.go:89] found id: "8f354571302cfa24ed2fcb9dc1a59908900201f2ee8a9f0985e0cd698988ceff"
	I1115 09:44:06.816808   69119 cri.go:89] found id: "1f26d41b1ae72e1dc34f54376e004475d2c4c414c8b70169e5095a83b0a7212d"
	I1115 09:44:06.816812   69119 cri.go:89] found id: ""
	I1115 09:44:06.816867   69119 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:44:06.834395   69119 out.go:203] 
	W1115 09:44:06.835797   69119 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:44:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:44:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:44:06.835845   69119 out.go:285] * 
	* 
	W1115 09:44:06.841892   69119 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:44:06.843836   69119 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-209049 --alsologtostderr -v=1": exit status 11
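The harness drives the minikube binary directly and asserts on its exit status; here the enable command exits 11 with MK_ADDON_ENABLE_PAUSED for the same runc reason seen in the earlier disable failures. A minimal sketch of invoking the same command and recovering that exit code follows (the binary path, profile name, and flags are copied from the log; the surrounding program is illustrative, not the test's own code):

// Runs the command from the log above and distinguishes a non-zero minikube
// exit code from a failure to start the process at all.
package main

import (
    "errors"
    "fmt"
    "os/exec"
)

func main() {
    cmd := exec.Command("out/minikube-linux-amd64",
        "addons", "enable", "headlamp", "-p", "addons-209049",
        "--alsologtostderr", "-v=1")
    out, err := cmd.CombinedOutput()
    var ee *exec.ExitError
    if errors.As(err, &ee) {
        // In the run above this prints exit code 11.
        fmt.Printf("minikube exited with %d:\n%s", ee.ExitCode(), out)
        return
    }
    if err != nil {
        fmt.Println("could not run minikube:", err)
        return
    }
    fmt.Print(string(out))
}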
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-209049
helpers_test.go:243: (dbg) docker inspect addons-209049:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "95837a795344356a1a114ddf87e92b007ae01ddbe0deac5a406c954fdd9cec8c",
	        "Created": "2025-11-15T09:41:31.042910437Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 61080,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T09:41:31.079824221Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/95837a795344356a1a114ddf87e92b007ae01ddbe0deac5a406c954fdd9cec8c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/95837a795344356a1a114ddf87e92b007ae01ddbe0deac5a406c954fdd9cec8c/hostname",
	        "HostsPath": "/var/lib/docker/containers/95837a795344356a1a114ddf87e92b007ae01ddbe0deac5a406c954fdd9cec8c/hosts",
	        "LogPath": "/var/lib/docker/containers/95837a795344356a1a114ddf87e92b007ae01ddbe0deac5a406c954fdd9cec8c/95837a795344356a1a114ddf87e92b007ae01ddbe0deac5a406c954fdd9cec8c-json.log",
	        "Name": "/addons-209049",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-209049:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-209049",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "95837a795344356a1a114ddf87e92b007ae01ddbe0deac5a406c954fdd9cec8c",
	                "LowerDir": "/var/lib/docker/overlay2/0bea74c9d174af12d0fdc1cf6c7f4a454922325e50efe47725cba729a0727358-init/diff:/var/lib/docker/overlay2/507c85b6fb43d9c98216fac79d7a8d08dd20b2a63d0fbfc758336e5d38c04044/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0bea74c9d174af12d0fdc1cf6c7f4a454922325e50efe47725cba729a0727358/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0bea74c9d174af12d0fdc1cf6c7f4a454922325e50efe47725cba729a0727358/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0bea74c9d174af12d0fdc1cf6c7f4a454922325e50efe47725cba729a0727358/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-209049",
	                "Source": "/var/lib/docker/volumes/addons-209049/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-209049",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-209049",
	                "name.minikube.sigs.k8s.io": "addons-209049",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "06e2c71b6cbccd1d4c51f6c3805feb68e36ce9441eb0040f47d0ee0bc8c38a66",
	            "SandboxKey": "/var/run/docker/netns/06e2c71b6cbc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-209049": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "26a7626725f3ddc9991ec6ab765481cf6dae3a4fcf9d12ac6a76dd599e86b571",
	                    "EndpointID": "b62a20aea2e494f8b7019f1b0691b31fd48128bf365f9c97203b08d51b7319b5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "8e:ac:c7:05:09:62",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-209049",
	                        "95837a795344"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
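The inspect output above is also what minikube's cli_runner consumes: the earlier stderr applies the format template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}` to find the host-mapped SSH port, which is 32768 for this container. Below is a small sketch of pulling the same value from the inspect JSON; the container name comes from the report, docker is assumed to be on PATH, and the decode struct and helper name are illustrative:

// Extracts the host port bound to the container's 22/tcp, i.e. the SSH port
// minikube dials, from `docker inspect` output.
package main

import (
    "encoding/json"
    "fmt"
    "os/exec"
)

type inspect struct {
    NetworkSettings struct {
        Ports map[string][]struct {
            HostIP   string `json:"HostIp"`
            HostPort string `json:"HostPort"`
        } `json:"Ports"`
    } `json:"NetworkSettings"`
}

func sshPort(container string) (string, error) {
    out, err := exec.Command("docker", "inspect", container).Output()
    if err != nil {
        return "", err
    }
    var infos []inspect
    if err := json.Unmarshal(out, &infos); err != nil {
        return "", err
    }
    if len(infos) == 0 || len(infos[0].NetworkSettings.Ports["22/tcp"]) == 0 {
        return "", fmt.Errorf("no 22/tcp binding for %s", container)
    }
    return infos[0].NetworkSettings.Ports["22/tcp"][0].HostPort, nil
}

func main() {
    port, err := sshPort("addons-209049")
    fmt.Println(port, err) // the inspect output above maps 22/tcp to 32768
}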
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-209049 -n addons-209049
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-209049 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-209049 logs -n 25: (1.128984424s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-491395 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-491395   │ jenkins │ v1.37.0 │ 15 Nov 25 09:40 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 15 Nov 25 09:40 UTC │ 15 Nov 25 09:40 UTC │
	│ delete  │ -p download-only-491395                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-491395   │ jenkins │ v1.37.0 │ 15 Nov 25 09:40 UTC │ 15 Nov 25 09:40 UTC │
	│ start   │ -o=json --download-only -p download-only-233430 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-233430   │ jenkins │ v1.37.0 │ 15 Nov 25 09:40 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 15 Nov 25 09:41 UTC │ 15 Nov 25 09:41 UTC │
	│ delete  │ -p download-only-233430                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-233430   │ jenkins │ v1.37.0 │ 15 Nov 25 09:41 UTC │ 15 Nov 25 09:41 UTC │
	│ delete  │ -p download-only-491395                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-491395   │ jenkins │ v1.37.0 │ 15 Nov 25 09:41 UTC │ 15 Nov 25 09:41 UTC │
	│ delete  │ -p download-only-233430                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-233430   │ jenkins │ v1.37.0 │ 15 Nov 25 09:41 UTC │ 15 Nov 25 09:41 UTC │
	│ start   │ --download-only -p download-docker-722822 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-722822 │ jenkins │ v1.37.0 │ 15 Nov 25 09:41 UTC │                     │
	│ delete  │ -p download-docker-722822                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-722822 │ jenkins │ v1.37.0 │ 15 Nov 25 09:41 UTC │ 15 Nov 25 09:41 UTC │
	│ start   │ --download-only -p binary-mirror-602120 --alsologtostderr --binary-mirror http://127.0.0.1:40137 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-602120   │ jenkins │ v1.37.0 │ 15 Nov 25 09:41 UTC │                     │
	│ delete  │ -p binary-mirror-602120                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-602120   │ jenkins │ v1.37.0 │ 15 Nov 25 09:41 UTC │ 15 Nov 25 09:41 UTC │
	│ addons  │ disable dashboard -p addons-209049                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-209049          │ jenkins │ v1.37.0 │ 15 Nov 25 09:41 UTC │                     │
	│ addons  │ enable dashboard -p addons-209049                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-209049          │ jenkins │ v1.37.0 │ 15 Nov 25 09:41 UTC │                     │
	│ start   │ -p addons-209049 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-209049          │ jenkins │ v1.37.0 │ 15 Nov 25 09:41 UTC │ 15 Nov 25 09:43 UTC │
	│ addons  │ addons-209049 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-209049          │ jenkins │ v1.37.0 │ 15 Nov 25 09:43 UTC │                     │
	│ addons  │ addons-209049 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-209049          │ jenkins │ v1.37.0 │ 15 Nov 25 09:44 UTC │                     │
	│ addons  │ enable headlamp -p addons-209049 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-209049          │ jenkins │ v1.37.0 │ 15 Nov 25 09:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:41:06
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:41:06.071042   60422 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:41:06.071323   60422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:41:06.071333   60422 out.go:374] Setting ErrFile to fd 2...
	I1115 09:41:06.071337   60422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:41:06.071555   60422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 09:41:06.072108   60422 out.go:368] Setting JSON to false
	I1115 09:41:06.072942   60422 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5003,"bootTime":1763194663,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:41:06.073070   60422 start.go:143] virtualization: kvm guest
	I1115 09:41:06.075011   60422 out.go:179] * [addons-209049] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:41:06.076040   60422 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 09:41:06.076038   60422 notify.go:221] Checking for updates...
	I1115 09:41:06.077039   60422 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:41:06.078187   60422 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 09:41:06.079255   60422 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube
	I1115 09:41:06.080197   60422 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 09:41:06.081325   60422 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:41:06.082536   60422 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:41:06.109449   60422 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 09:41:06.109555   60422 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:41:06.165866   60422 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:46 SystemTime:2025-11-15 09:41:06.156299852 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:41:06.165998   60422 docker.go:319] overlay module found
	I1115 09:41:06.167669   60422 out.go:179] * Using the docker driver based on user configuration
	I1115 09:41:06.168838   60422 start.go:309] selected driver: docker
	I1115 09:41:06.168853   60422 start.go:930] validating driver "docker" against <nil>
	I1115 09:41:06.168865   60422 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:41:06.169771   60422 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:41:06.229504   60422 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:46 SystemTime:2025-11-15 09:41:06.219603445 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:41:06.229683   60422 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 09:41:06.229940   60422 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:41:06.231671   60422 out.go:179] * Using Docker driver with root privileges
	I1115 09:41:06.232772   60422 cni.go:84] Creating CNI manager for ""
	I1115 09:41:06.232851   60422 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:41:06.232879   60422 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 09:41:06.233002   60422 start.go:353] cluster config:
	{Name:addons-209049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-209049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:41:06.234237   60422 out.go:179] * Starting "addons-209049" primary control-plane node in "addons-209049" cluster
	I1115 09:41:06.235265   60422 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 09:41:06.236297   60422 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 09:41:06.237250   60422 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:41:06.237280   60422 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 09:41:06.237294   60422 cache.go:65] Caching tarball of preloaded images
	I1115 09:41:06.237328   60422 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 09:41:06.237402   60422 preload.go:238] Found /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 09:41:06.237417   60422 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 09:41:06.237758   60422 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/config.json ...
	I1115 09:41:06.237794   60422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/config.json: {Name:mkdbd1a6c4c4edb33badfd696396c451aa16190d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:41:06.254555   60422 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1115 09:41:06.254723   60422 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1115 09:41:06.254747   60422 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory, skipping pull
	I1115 09:41:06.254756   60422 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in cache, skipping pull
	I1115 09:41:06.254770   60422 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 as a tarball
	I1115 09:41:06.254798   60422 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from local cache
	I1115 09:41:21.221530   60422 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from cached tarball
	I1115 09:41:21.221569   60422 cache.go:243] Successfully downloaded all kic artifacts
	I1115 09:41:21.221634   60422 start.go:360] acquireMachinesLock for addons-209049: {Name:mk6ef50958e20619f4fabbc8361c602d26a1aa95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:41:21.221744   60422 start.go:364] duration metric: took 88.34µs to acquireMachinesLock for "addons-209049"
	I1115 09:41:21.221783   60422 start.go:93] Provisioning new machine with config: &{Name:addons-209049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-209049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:41:21.221852   60422 start.go:125] createHost starting for "" (driver="docker")
	I1115 09:41:21.224309   60422 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1115 09:41:21.224630   60422 start.go:159] libmachine.API.Create for "addons-209049" (driver="docker")
	I1115 09:41:21.224674   60422 client.go:173] LocalClient.Create starting
	I1115 09:41:21.224836   60422 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem
	I1115 09:41:21.546695   60422 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem
	I1115 09:41:21.774293   60422 cli_runner.go:164] Run: docker network inspect addons-209049 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 09:41:21.792453   60422 cli_runner.go:211] docker network inspect addons-209049 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 09:41:21.792538   60422 network_create.go:284] running [docker network inspect addons-209049] to gather additional debugging logs...
	I1115 09:41:21.792561   60422 cli_runner.go:164] Run: docker network inspect addons-209049
	W1115 09:41:21.808812   60422 cli_runner.go:211] docker network inspect addons-209049 returned with exit code 1
	I1115 09:41:21.808844   60422 network_create.go:287] error running [docker network inspect addons-209049]: docker network inspect addons-209049: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-209049 not found
	I1115 09:41:21.808874   60422 network_create.go:289] output of [docker network inspect addons-209049]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-209049 not found
	
	** /stderr **
	I1115 09:41:21.808998   60422 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 09:41:21.826008   60422 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015c2a60}
	I1115 09:41:21.826058   60422 network_create.go:124] attempt to create docker network addons-209049 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1115 09:41:21.826111   60422 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-209049 addons-209049
	I1115 09:41:21.871741   60422 network_create.go:108] docker network addons-209049 192.168.49.0/24 created
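The two preceding lines show minikube choosing the free 192.168.49.0/24 subnet and creating a dedicated bridge network named after the profile. For anyone reproducing this step by hand, a minimal sketch of how to confirm the subnet and gateway afterwards, assuming the profile name addons-209049 from this log:

    docker network inspect addons-209049 \
      --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
    # expected output: 192.168.49.0/24 192.168.49.1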
	I1115 09:41:21.871798   60422 kic.go:121] calculated static IP "192.168.49.2" for the "addons-209049" container
	I1115 09:41:21.871913   60422 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 09:41:21.887163   60422 cli_runner.go:164] Run: docker volume create addons-209049 --label name.minikube.sigs.k8s.io=addons-209049 --label created_by.minikube.sigs.k8s.io=true
	I1115 09:41:21.904141   60422 oci.go:103] Successfully created a docker volume addons-209049
	I1115 09:41:21.904228   60422 cli_runner.go:164] Run: docker run --rm --name addons-209049-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-209049 --entrypoint /usr/bin/test -v addons-209049:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 09:41:26.755019   60422 cli_runner.go:217] Completed: docker run --rm --name addons-209049-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-209049 --entrypoint /usr/bin/test -v addons-209049:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib: (4.850742883s)
	I1115 09:41:26.755056   60422 oci.go:107] Successfully prepared a docker volume addons-209049
	I1115 09:41:26.755097   60422 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:41:26.755113   60422 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 09:41:26.755187   60422 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-209049:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1115 09:41:30.970456   60422 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-209049:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.215224582s)
	I1115 09:41:30.970492   60422 kic.go:203] duration metric: took 4.215377053s to extract preloaded images to volume ...
	W1115 09:41:30.970624   60422 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 09:41:30.970732   60422 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 09:41:31.027186   60422 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-209049 --name addons-209049 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-209049 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-209049 --network addons-209049 --ip 192.168.49.2 --volume addons-209049:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 09:41:31.339847   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Running}}
	I1115 09:41:31.358737   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:31.376974   60422 cli_runner.go:164] Run: docker exec addons-209049 stat /var/lib/dpkg/alternatives/iptables
	I1115 09:41:31.422252   60422 oci.go:144] the created container "addons-209049" has a running status.
	I1115 09:41:31.422292   60422 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa...
	I1115 09:41:31.512400   60422 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 09:41:31.538714   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:31.557614   60422 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 09:41:31.557638   60422 kic_runner.go:114] Args: [docker exec --privileged addons-209049 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 09:41:31.600867   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:31.619399   60422 machine.go:94] provisionDockerMachine start ...
	I1115 09:41:31.619486   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:31.642785   60422 main.go:143] libmachine: Using SSH client type: native
	I1115 09:41:31.643154   60422 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1115 09:41:31.643176   60422 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 09:41:31.644474   60422 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48494->127.0.0.1:32768: read: connection reset by peer
	I1115 09:41:34.770904   60422 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-209049
	
	I1115 09:41:34.770938   60422 ubuntu.go:182] provisioning hostname "addons-209049"
	I1115 09:41:34.771017   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:34.789313   60422 main.go:143] libmachine: Using SSH client type: native
	I1115 09:41:34.789535   60422 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1115 09:41:34.789548   60422 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-209049 && echo "addons-209049" | sudo tee /etc/hostname
	I1115 09:41:34.924771   60422 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-209049
	
	I1115 09:41:34.924859   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:34.942941   60422 main.go:143] libmachine: Using SSH client type: native
	I1115 09:41:34.943193   60422 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1115 09:41:34.943212   60422 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-209049' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-209049/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-209049' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 09:41:35.069588   60422 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 09:41:35.069615   60422 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-55448/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-55448/.minikube}
	I1115 09:41:35.069667   60422 ubuntu.go:190] setting up certificates
	I1115 09:41:35.069685   60422 provision.go:84] configureAuth start
	I1115 09:41:35.069743   60422 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-209049
	I1115 09:41:35.086965   60422 provision.go:143] copyHostCerts
	I1115 09:41:35.087042   60422 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem (1082 bytes)
	I1115 09:41:35.087182   60422 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem (1123 bytes)
	I1115 09:41:35.087244   60422 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem (1679 bytes)
	I1115 09:41:35.087292   60422 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem org=jenkins.addons-209049 san=[127.0.0.1 192.168.49.2 addons-209049 localhost minikube]
	I1115 09:41:35.131093   60422 provision.go:177] copyRemoteCerts
	I1115 09:41:35.131159   60422 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 09:41:35.131200   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:35.148327   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:35.242277   60422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 09:41:35.261320   60422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 09:41:35.278296   60422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 09:41:35.294483   60422 provision.go:87] duration metric: took 224.780726ms to configureAuth
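configureAuth above generated a server certificate whose SANs cover 127.0.0.1, 192.168.49.2, addons-209049, localhost and minikube, and copied it to /etc/docker on the node. An illustrative way to double-check those SANs from the host that ran this job, using the paths recorded in the log:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'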
	I1115 09:41:35.294509   60422 ubuntu.go:206] setting minikube options for container-runtime
	I1115 09:41:35.294707   60422 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:41:35.294829   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:35.311911   60422 main.go:143] libmachine: Using SSH client type: native
	I1115 09:41:35.312155   60422 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1115 09:41:35.312173   60422 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 09:41:35.545004   60422 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 09:41:35.545030   60422 machine.go:97] duration metric: took 3.925607619s to provisionDockerMachine
	I1115 09:41:35.545041   60422 client.go:176] duration metric: took 14.320358011s to LocalClient.Create
	I1115 09:41:35.545060   60422 start.go:167] duration metric: took 14.32043594s to libmachine.API.Create "addons-209049"
	I1115 09:41:35.545069   60422 start.go:293] postStartSetup for "addons-209049" (driver="docker")
	I1115 09:41:35.545077   60422 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 09:41:35.545128   60422 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 09:41:35.545164   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:35.563905   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:35.659052   60422 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 09:41:35.662540   60422 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 09:41:35.662572   60422 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 09:41:35.662585   60422 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/addons for local assets ...
	I1115 09:41:35.662651   60422 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/files for local assets ...
	I1115 09:41:35.662686   60422 start.go:296] duration metric: took 117.610467ms for postStartSetup
	I1115 09:41:35.662979   60422 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-209049
	I1115 09:41:35.680394   60422 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/config.json ...
	I1115 09:41:35.680665   60422 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:41:35.680715   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:35.697414   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:35.787054   60422 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 09:41:35.791690   60422 start.go:128] duration metric: took 14.569824395s to createHost
	I1115 09:41:35.791719   60422 start.go:83] releasing machines lock for "addons-209049", held for 14.569959309s
	I1115 09:41:35.791803   60422 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-209049
	I1115 09:41:35.809103   60422 ssh_runner.go:195] Run: cat /version.json
	I1115 09:41:35.809168   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:35.809224   60422 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 09:41:35.809296   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:35.827715   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:35.828036   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:35.973395   60422 ssh_runner.go:195] Run: systemctl --version
	I1115 09:41:35.979728   60422 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 09:41:36.014105   60422 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 09:41:36.018631   60422 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 09:41:36.018683   60422 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 09:41:36.043631   60422 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1115 09:41:36.043656   60422 start.go:496] detecting cgroup driver to use...
	I1115 09:41:36.043696   60422 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 09:41:36.043766   60422 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 09:41:36.059521   60422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 09:41:36.071405   60422 docker.go:218] disabling cri-docker service (if available) ...
	I1115 09:41:36.071480   60422 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 09:41:36.087539   60422 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 09:41:36.104320   60422 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 09:41:36.183377   60422 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 09:41:36.271425   60422 docker.go:234] disabling docker service ...
	I1115 09:41:36.271494   60422 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 09:41:36.290157   60422 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 09:41:36.302345   60422 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 09:41:36.382297   60422 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 09:41:36.462445   60422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 09:41:36.474700   60422 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 09:41:36.488142   60422 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 09:41:36.488201   60422 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:41:36.497835   60422 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 09:41:36.497901   60422 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:41:36.506231   60422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:41:36.514407   60422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:41:36.522851   60422 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 09:41:36.530580   60422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:41:36.538738   60422 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:41:36.551707   60422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
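The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: the pause image becomes registry.k8s.io/pause:3.10.1, the cgroup manager becomes cgroupfs, conmon is pinned to the pod cgroup, and net.ipv4.ip_unprivileged_port_start=0 is appended to default_sysctls. Assuming CRI-O's usual TOML layout (the exact section headers of the kicbase drop-in are not shown in this log), the edited fragment ends up looking roughly like:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]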
	I1115 09:41:36.560099   60422 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 09:41:36.567189   60422 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1115 09:41:36.567236   60422 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1115 09:41:36.579758   60422 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 09:41:36.587718   60422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:41:36.665800   60422 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 09:41:36.770176   60422 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 09:41:36.770267   60422 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 09:41:36.774241   60422 start.go:564] Will wait 60s for crictl version
	I1115 09:41:36.774306   60422 ssh_runner.go:195] Run: which crictl
	I1115 09:41:36.777885   60422 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 09:41:36.801691   60422 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 09:41:36.801788   60422 ssh_runner.go:195] Run: crio --version
	I1115 09:41:36.828550   60422 ssh_runner.go:195] Run: crio --version
	I1115 09:41:36.857364   60422 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 09:41:36.858532   60422 cli_runner.go:164] Run: docker network inspect addons-209049 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 09:41:36.876200   60422 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 09:41:36.880333   60422 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:41:36.890204   60422 kubeadm.go:884] updating cluster {Name:addons-209049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-209049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 09:41:36.890339   60422 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:41:36.890398   60422 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:41:36.920882   60422 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 09:41:36.920904   60422 crio.go:433] Images already preloaded, skipping extraction
	I1115 09:41:36.920965   60422 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:41:36.946346   60422 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 09:41:36.946371   60422 cache_images.go:86] Images are preloaded, skipping loading
	I1115 09:41:36.946378   60422 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1115 09:41:36.946499   60422 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-209049 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-209049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
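The drop-in printed above is written later to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; it clears the packaged ExecStart and pins the kubelet to the CRI-O socket, the node IP 192.168.49.2 and the hostname addons-209049. A sketch of one way to read back what actually landed on the node once it is running, using the profile name from this log:

    minikube -p addons-209049 ssh -- systemctl cat kubelet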
	I1115 09:41:36.946573   60422 ssh_runner.go:195] Run: crio config
	I1115 09:41:36.991230   60422 cni.go:84] Creating CNI manager for ""
	I1115 09:41:36.991254   60422 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:41:36.991273   60422 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 09:41:36.991299   60422 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-209049 NodeName:addons-209049 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 09:41:36.991437   60422 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-209049"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 09:41:36.991510   60422 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 09:41:36.999499   60422 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 09:41:36.999562   60422 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 09:41:37.007024   60422 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 09:41:37.019032   60422 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 09:41:37.033364   60422 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
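The kubeadm configuration rendered above has just been copied to /var/tmp/minikube/kubeadm.yaml.new; it is promoted to /var/tmp/minikube/kubeadm.yaml and consumed by kubeadm init at the end of this section. To sanity-check such a file outside a test run, a minimal sketch using the kubeadm binary staged on the node (assuming kubeadm config validate is available in v1.34.1):

    /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml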
	I1115 09:41:37.045332   60422 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1115 09:41:37.048872   60422 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:41:37.058308   60422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:41:37.135472   60422 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:41:37.162540   60422 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049 for IP: 192.168.49.2
	I1115 09:41:37.162563   60422 certs.go:195] generating shared ca certs ...
	I1115 09:41:37.162584   60422 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:41:37.162747   60422 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 09:41:37.672914   60422 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt ...
	I1115 09:41:37.672959   60422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt: {Name:mk79b8053ded3a30f80aec48e33e9cc288cf87ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:41:37.673176   60422 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key ...
	I1115 09:41:37.673193   60422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key: {Name:mk23e892d9a40c4ce81a499215fe2d80e5697a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:41:37.673307   60422 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 09:41:37.781907   60422 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt ...
	I1115 09:41:37.781940   60422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt: {Name:mk3468db0c2dcd84d5f98fba6ac770ec71e6a9a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:41:37.782154   60422 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key ...
	I1115 09:41:37.782174   60422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key: {Name:mk3f23db79dc681ebc09dd50cd76cdc2cd124f5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:41:37.782285   60422 certs.go:257] generating profile certs ...
	I1115 09:41:37.782367   60422 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.key
	I1115 09:41:37.782390   60422 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt with IP's: []
	I1115 09:41:37.798013   60422 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt ...
	I1115 09:41:37.798033   60422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt: {Name:mk9f8a06e150e4ab615cfeb860a4df3cd046bcd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:41:37.798186   60422 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.key ...
	I1115 09:41:37.798200   60422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.key: {Name:mke2a2122d4c3934eb296ff2f0d02b9e826c7efe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:41:37.798296   60422 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/apiserver.key.294b6e2f
	I1115 09:41:37.798316   60422 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/apiserver.crt.294b6e2f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1115 09:41:38.338641   60422 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/apiserver.crt.294b6e2f ...
	I1115 09:41:38.338680   60422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/apiserver.crt.294b6e2f: {Name:mk7a081e4c61e7aa3e99ef74af10d2ab8744cf45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:41:38.338908   60422 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/apiserver.key.294b6e2f ...
	I1115 09:41:38.338929   60422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/apiserver.key.294b6e2f: {Name:mk337f05ec6a0f43baacca048181e76f39edf618 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:41:38.339074   60422 certs.go:382] copying /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/apiserver.crt.294b6e2f -> /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/apiserver.crt
	I1115 09:41:38.339182   60422 certs.go:386] copying /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/apiserver.key.294b6e2f -> /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/apiserver.key
	I1115 09:41:38.339243   60422 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/proxy-client.key
	I1115 09:41:38.339265   60422 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/proxy-client.crt with IP's: []
	I1115 09:41:38.520170   60422 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/proxy-client.crt ...
	I1115 09:41:38.520204   60422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/proxy-client.crt: {Name:mkde5c047e033a4775bfd74c851129f97f744b64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:41:38.520408   60422 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/proxy-client.key ...
	I1115 09:41:38.520425   60422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/proxy-client.key: {Name:mkccf08a52ea888ff6354e4fc12e5615be3a0451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:41:38.520633   60422 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 09:41:38.520669   60422 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 09:41:38.520704   60422 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 09:41:38.520731   60422 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 09:41:38.521370   60422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 09:41:38.539153   60422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 09:41:38.555926   60422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 09:41:38.572674   60422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 09:41:38.589496   60422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1115 09:41:38.606693   60422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 09:41:38.623703   60422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 09:41:38.640357   60422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 09:41:38.657318   60422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 09:41:38.676071   60422 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 09:41:38.687871   60422 ssh_runner.go:195] Run: openssl version
	I1115 09:41:38.693597   60422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 09:41:38.703432   60422 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:41:38.706982   60422 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:41:38.707037   60422 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:41:38.742177   60422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
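The two commands above belong together: openssl x509 -hash -noout prints the subject hash of minikubeCA.pem, and that hash (b5213941) is used as the name of the /etc/ssl/certs/b5213941.0 symlink, which is how OpenSSL-style trust stores look up a CA during verification. The same check can be repeated by hand on the node:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, matching the /etc/ssl/certs/b5213941.0 link created above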
	I1115 09:41:38.751621   60422 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 09:41:38.755384   60422 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 09:41:38.755459   60422 kubeadm.go:401] StartCluster: {Name:addons-209049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-209049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:41:38.755551   60422 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:41:38.755608   60422 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:41:38.781921   60422 cri.go:89] found id: ""
	I1115 09:41:38.782001   60422 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 09:41:38.790481   60422 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 09:41:38.798156   60422 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 09:41:38.798214   60422 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 09:41:38.805518   60422 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 09:41:38.805538   60422 kubeadm.go:158] found existing configuration files:
	
	I1115 09:41:38.805587   60422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 09:41:38.812827   60422 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 09:41:38.812867   60422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 09:41:38.819744   60422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 09:41:38.827139   60422 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 09:41:38.827186   60422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 09:41:38.834190   60422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 09:41:38.841531   60422 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 09:41:38.841575   60422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 09:41:38.848598   60422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 09:41:38.855883   60422 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 09:41:38.855929   60422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 09:41:38.863051   60422 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 09:41:38.898909   60422 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 09:41:38.899027   60422 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 09:41:38.918107   60422 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 09:41:38.918223   60422 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1115 09:41:38.918299   60422 kubeadm.go:319] OS: Linux
	I1115 09:41:38.918361   60422 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 09:41:38.918420   60422 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 09:41:38.918483   60422 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 09:41:38.918546   60422 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 09:41:38.918608   60422 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 09:41:38.918673   60422 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 09:41:38.918744   60422 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 09:41:38.918820   60422 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 09:41:38.918892   60422 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 09:41:38.973717   60422 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 09:41:38.973839   60422 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 09:41:38.973982   60422 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 09:41:38.981938   60422 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 09:41:38.983969   60422 out.go:252]   - Generating certificates and keys ...
	I1115 09:41:38.984052   60422 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 09:41:38.984138   60422 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 09:41:39.424827   60422 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 09:41:39.676914   60422 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 09:41:39.737304   60422 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 09:41:40.363199   60422 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 09:41:40.632277   60422 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 09:41:40.632464   60422 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-209049 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1115 09:41:40.894113   60422 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 09:41:40.894329   60422 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-209049 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1115 09:41:41.068199   60422 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 09:41:41.143395   60422 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 09:41:41.351299   60422 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 09:41:41.351384   60422 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 09:41:41.495618   60422 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 09:41:41.794787   60422 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 09:41:41.877840   60422 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 09:41:42.061273   60422 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 09:41:42.443831   60422 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 09:41:42.444327   60422 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 09:41:42.448008   60422 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 09:41:42.449533   60422 out.go:252]   - Booting up control plane ...
	I1115 09:41:42.449677   60422 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 09:41:42.449808   60422 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 09:41:42.450391   60422 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 09:41:42.478151   60422 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 09:41:42.478294   60422 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 09:41:42.485496   60422 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 09:41:42.485773   60422 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 09:41:42.485843   60422 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 09:41:42.579049   60422 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 09:41:42.579215   60422 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 09:41:43.580864   60422 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00193317s
	I1115 09:41:43.584729   60422 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 09:41:43.584857   60422 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1115 09:41:43.585006   60422 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 09:41:43.585148   60422 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 09:41:46.699762   60422 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.115076181s
	I1115 09:41:46.993159   60422 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.408389221s
	I1115 09:41:48.086752   60422 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501992652s
	I1115 09:41:48.097615   60422 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 09:41:48.107369   60422 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 09:41:48.117662   60422 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 09:41:48.117995   60422 kubeadm.go:319] [mark-control-plane] Marking the node addons-209049 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 09:41:48.125947   60422 kubeadm.go:319] [bootstrap-token] Using token: 58g2md.y7ij3qnn2hkj3vkt
	I1115 09:41:48.127285   60422 out.go:252]   - Configuring RBAC rules ...
	I1115 09:41:48.127467   60422 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 09:41:48.131562   60422 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 09:41:48.136141   60422 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 09:41:48.138531   60422 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 09:41:48.140770   60422 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 09:41:48.143900   60422 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 09:41:48.492025   60422 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 09:41:48.905759   60422 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 09:41:49.491940   60422 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 09:41:49.492896   60422 kubeadm.go:319] 
	I1115 09:41:49.493024   60422 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 09:41:49.493050   60422 kubeadm.go:319] 
	I1115 09:41:49.493120   60422 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 09:41:49.493144   60422 kubeadm.go:319] 
	I1115 09:41:49.493194   60422 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 09:41:49.493287   60422 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 09:41:49.493366   60422 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 09:41:49.493377   60422 kubeadm.go:319] 
	I1115 09:41:49.493472   60422 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 09:41:49.493481   60422 kubeadm.go:319] 
	I1115 09:41:49.493538   60422 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 09:41:49.493563   60422 kubeadm.go:319] 
	I1115 09:41:49.493641   60422 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 09:41:49.493733   60422 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 09:41:49.493834   60422 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 09:41:49.493843   60422 kubeadm.go:319] 
	I1115 09:41:49.493938   60422 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 09:41:49.494064   60422 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 09:41:49.494072   60422 kubeadm.go:319] 
	I1115 09:41:49.494199   60422 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 58g2md.y7ij3qnn2hkj3vkt \
	I1115 09:41:49.494292   60422 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c958101d3a67868123c5e439a6c1ea6e30a99a6538b76cbe461e2c919cf1aec \
	I1115 09:41:49.494313   60422 kubeadm.go:319] 	--control-plane 
	I1115 09:41:49.494317   60422 kubeadm.go:319] 
	I1115 09:41:49.494387   60422 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 09:41:49.494393   60422 kubeadm.go:319] 
	I1115 09:41:49.494465   60422 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 58g2md.y7ij3qnn2hkj3vkt \
	I1115 09:41:49.494590   60422 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c958101d3a67868123c5e439a6c1ea6e30a99a6538b76cbe461e2c919cf1aec 
	I1115 09:41:49.496050   60422 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1115 09:41:49.496295   60422 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1115 09:41:49.496396   60422 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
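The join commands printed above carry a bootstrap token plus the SHA-256 hash of the cluster CA public key. As an illustrative sketch only (not part of this test run, and assuming openssl is available in the node image), that hash can be recomputed from the CA certificate kubeadm used here, following the standard pipeline from the kubeadm documentation:

	# hash of the CA public key, as expected by --discovery-token-ca-cert-hash
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'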
	I1115 09:41:49.496418   60422 cni.go:84] Creating CNI manager for ""
	I1115 09:41:49.496433   60422 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:41:49.497889   60422 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1115 09:41:49.498919   60422 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 09:41:49.503298   60422 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 09:41:49.503314   60422 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 09:41:49.516421   60422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
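Because the docker driver is paired with the crio runtime, minikube chooses kindnet and applies its manifest with the bundled kubectl, as logged above. A minimal sketch of how the result could be checked by hand, reusing the same kubectl binary and kubeconfig from this log (the app=kindnet label is an assumption about the manifest, not something this run prints):

	# CNI pods should come up, after which the node reports Ready
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get pods -n kube-system -l app=kindnet
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes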
	I1115 09:41:49.715070   60422 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 09:41:49.715187   60422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:41:49.715211   60422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-209049 minikube.k8s.io/updated_at=2025_11_15T09_41_49_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510 minikube.k8s.io/name=addons-209049 minikube.k8s.io/primary=true
	I1115 09:41:49.796317   60422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:41:49.796394   60422 ops.go:34] apiserver oom_adj: -16
	I1115 09:41:50.296480   60422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:41:50.797319   60422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:41:51.296431   60422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:41:51.797394   60422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:41:52.296412   60422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:41:52.796930   60422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:41:53.296718   60422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:41:53.797164   60422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:41:54.296463   60422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:41:54.796386   60422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:41:55.297152   60422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:41:55.389208   60422 kubeadm.go:1114] duration metric: took 5.674113376s to wait for elevateKubeSystemPrivileges
	I1115 09:41:55.389268   60422 kubeadm.go:403] duration metric: took 16.633804184s to StartCluster
	I1115 09:41:55.389294   60422 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:41:55.389438   60422 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 09:41:55.390095   60422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:41:55.390335   60422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 09:41:55.390384   60422 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:41:55.390589   60422 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:41:55.390428   60422 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1115 09:41:55.390674   60422 addons.go:70] Setting yakd=true in profile "addons-209049"
	I1115 09:41:55.390691   60422 addons.go:239] Setting addon yakd=true in "addons-209049"
	I1115 09:41:55.390713   60422 addons.go:70] Setting inspektor-gadget=true in profile "addons-209049"
	I1115 09:41:55.390733   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.390738   60422 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-209049"
	I1115 09:41:55.390747   60422 addons.go:70] Setting gcp-auth=true in profile "addons-209049"
	I1115 09:41:55.390755   60422 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-209049"
	I1115 09:41:55.390770   60422 mustload.go:66] Loading cluster: addons-209049
	I1115 09:41:55.390787   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.390808   60422 addons.go:70] Setting registry-creds=true in profile "addons-209049"
	I1115 09:41:55.390803   60422 addons.go:70] Setting ingress=true in profile "addons-209049"
	I1115 09:41:55.390808   60422 addons.go:70] Setting ingress-dns=true in profile "addons-209049"
	I1115 09:41:55.390850   60422 addons.go:239] Setting addon registry-creds=true in "addons-209049"
	I1115 09:41:55.390853   60422 addons.go:239] Setting addon ingress=true in "addons-209049"
	I1115 09:41:55.390872   60422 addons.go:239] Setting addon ingress-dns=true in "addons-209049"
	I1115 09:41:55.390877   60422 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-209049"
	I1115 09:41:55.390896   60422 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-209049"
	I1115 09:41:55.390902   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.390913   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.390919   60422 addons.go:70] Setting cloud-spanner=true in profile "addons-209049"
	I1115 09:41:55.391616   60422 addons.go:70] Setting volumesnapshots=true in profile "addons-209049"
	I1115 09:41:55.391637   60422 addons.go:70] Setting metrics-server=true in profile "addons-209049"
	I1115 09:41:55.391644   60422 addons.go:239] Setting addon volumesnapshots=true in "addons-209049"
	I1115 09:41:55.391657   60422 addons.go:239] Setting addon metrics-server=true in "addons-209049"
	I1115 09:41:55.391670   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.391689   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.392080   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.392105   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.392216   60422 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-209049"
	I1115 09:41:55.392292   60422 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-209049"
	I1115 09:41:55.392295   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.392315   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.392326   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.392912   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.394064   60422 addons.go:70] Setting volcano=true in profile "addons-209049"
	I1115 09:41:55.394137   60422 addons.go:239] Setting addon volcano=true in "addons-209049"
	I1115 09:41:55.394190   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.394637   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.394785   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.390923   60422 addons.go:70] Setting registry=true in profile "addons-209049"
	I1115 09:41:55.394985   60422 addons.go:239] Setting addon registry=true in "addons-209049"
	I1115 09:41:55.395019   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.395080   60422 out.go:179] * Verifying Kubernetes components...
	I1115 09:41:55.391622   60422 addons.go:239] Setting addon cloud-spanner=true in "addons-209049"
	I1115 09:41:55.390731   60422 addons.go:70] Setting default-storageclass=true in profile "addons-209049"
	I1115 09:41:55.391328   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.391418   60422 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:41:55.395381   60422 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-209049"
	I1115 09:41:55.395466   60422 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-209049"
	I1115 09:41:55.395916   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.396530   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.396893   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.397360   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.397432   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.390741   60422 addons.go:239] Setting addon inspektor-gadget=true in "addons-209049"
	I1115 09:41:55.397663   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.397681   60422 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-209049"
	I1115 09:41:55.398038   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.398479   60422 addons.go:70] Setting storage-provisioner=true in profile "addons-209049"
	I1115 09:41:55.398498   60422 addons.go:239] Setting addon storage-provisioner=true in "addons-209049"
	I1115 09:41:55.398559   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.397663   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.401018   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.401582   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.401164   60422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:41:55.411817   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.415184   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.419158   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.449337   60422 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1115 09:41:55.451004   60422 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1115 09:41:55.451031   60422 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1115 09:41:55.451109   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	W1115 09:41:55.453313   60422 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1115 09:41:55.453826   60422 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-209049"
	I1115 09:41:55.454315   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.455670   60422 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1115 09:41:55.457657   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.457786   60422 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1115 09:41:55.458054   60422 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1115 09:41:55.458697   60422 out.go:179]   - Using image docker.io/registry:3.0.0
	I1115 09:41:55.459005   60422 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1115 09:41:55.459022   60422 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1115 09:41:55.459091   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:55.460229   60422 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1115 09:41:55.460248   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1115 09:41:55.460298   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:55.460664   60422 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1115 09:41:55.460678   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1115 09:41:55.460724   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:55.468434   60422 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1115 09:41:55.472199   60422 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1115 09:41:55.472223   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1115 09:41:55.472284   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:55.472640   60422 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1115 09:41:55.472946   60422 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1115 09:41:55.475740   60422 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1115 09:41:55.480079   60422 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1115 09:41:55.480102   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1115 09:41:55.480164   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:55.480337   60422 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1115 09:41:55.481393   60422 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1115 09:41:55.482001   60422 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1115 09:41:55.482365   60422 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1115 09:41:55.483329   60422 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1115 09:41:55.483681   60422 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1115 09:41:55.483701   60422 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1115 09:41:55.483766   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:55.484179   60422 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1115 09:41:55.484637   60422 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1115 09:41:55.484684   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1115 09:41:55.484771   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:55.489997   60422 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1115 09:41:55.494874   60422 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1115 09:41:55.496043   60422 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1115 09:41:55.497014   60422 addons.go:239] Setting addon default-storageclass=true in "addons-209049"
	I1115 09:41:55.497062   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.497634   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:41:55.498806   60422 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1115 09:41:55.500136   60422 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1115 09:41:55.500156   60422 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1115 09:41:55.500220   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:55.504654   60422 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1115 09:41:55.510005   60422 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1115 09:41:55.510589   60422 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1115 09:41:55.510610   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1115 09:41:55.510677   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:55.511054   60422 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 09:41:55.511158   60422 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1115 09:41:55.512808   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1115 09:41:55.512993   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:55.512015   60422 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1115 09:41:55.512259   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:41:55.514938   60422 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1115 09:41:55.515025   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1115 09:41:55.515103   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:55.517667   60422 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 09:41:55.517687   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 09:41:55.517739   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:55.525379   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:55.527791   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:55.539023   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:55.541746   60422 out.go:179]   - Using image docker.io/busybox:stable
	I1115 09:41:55.543662   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:55.545082   60422 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1115 09:41:55.547169   60422 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 09:41:55.547247   60422 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 09:41:55.547340   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:55.548077   60422 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1115 09:41:55.548099   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1115 09:41:55.548154   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:41:55.558621   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:55.561849   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:55.564172   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:55.564256   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:55.564249   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:55.564615   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:55.566831   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:55.569516   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:55.571347   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:55.578150   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:55.579244   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:41:55.585289   60422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 09:41:55.895491   60422 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:41:56.094010   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1115 09:41:56.097088   60422 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1115 09:41:56.097115   60422 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1115 09:41:56.178668   60422 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1115 09:41:56.178717   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1115 09:41:56.179962   60422 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1115 09:41:56.179988   60422 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1115 09:41:56.180608   60422 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1115 09:41:56.180628   60422 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1115 09:41:56.288694   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1115 09:41:56.289065   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 09:41:56.289584   60422 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1115 09:41:56.289601   60422 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1115 09:41:56.294994   60422 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1115 09:41:56.295014   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1115 09:41:56.375368   60422 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1115 09:41:56.375396   60422 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1115 09:41:56.375863   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1115 09:41:56.378195   60422 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1115 09:41:56.378220   60422 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1115 09:41:56.379691   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1115 09:41:56.380508   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1115 09:41:56.388214   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1115 09:41:56.388378   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1115 09:41:56.388950   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 09:41:56.395686   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1115 09:41:56.478334   60422 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1115 09:41:56.478398   60422 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1115 09:41:56.486360   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1115 09:41:56.489039   60422 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1115 09:41:56.489066   60422 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1115 09:41:56.576754   60422 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1115 09:41:56.576787   60422 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1115 09:41:56.579230   60422 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1115 09:41:56.579299   60422 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1115 09:41:56.683443   60422 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1115 09:41:56.683472   60422 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1115 09:41:56.774049   60422 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1115 09:41:56.774207   60422 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1115 09:41:56.782835   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1115 09:41:56.790923   60422 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1115 09:41:56.790944   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1115 09:41:56.893760   60422 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1115 09:41:56.893801   60422 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1115 09:41:56.992642   60422 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1115 09:41:56.992708   60422 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1115 09:41:56.994517   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1115 09:41:57.184243   60422 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1115 09:41:57.184274   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1115 09:41:57.280597   60422 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1115 09:41:57.280626   60422 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1115 09:41:57.397347   60422 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.811959724s)
	I1115 09:41:57.397386   60422 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
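The sed pipeline that just completed rewrites the coredns ConfigMap so in-cluster DNS resolves host.minikube.internal to the host-side gateway (192.168.49.1). Judging from the text it injects, the Corefile ends up with a block along these lines (shown only to make the edit readable, not copied from the cluster):

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}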
	I1115 09:41:57.398639   60422 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.503112765s)
	I1115 09:41:57.399304   60422 node_ready.go:35] waiting up to 6m0s for node "addons-209049" to be "Ready" ...
	I1115 09:41:57.476166   60422 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1115 09:41:57.476199   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1115 09:41:57.479794   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1115 09:41:57.789927   60422 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1115 09:41:57.789972   60422 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1115 09:41:57.983339   60422 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-209049" context rescaled to 1 replicas
	I1115 09:41:58.174206   60422 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1115 09:41:58.174243   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1115 09:41:58.381746   60422 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1115 09:41:58.381776   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1115 09:41:58.486451   60422 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1115 09:41:58.486489   60422 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1115 09:41:58.496520   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.207419539s)
	I1115 09:41:58.496897   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.402848077s)
	I1115 09:41:58.787727   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1115 09:41:59.477190   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:00.305856   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.017116874s)
	I1115 09:42:00.978803   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.602896906s)
	I1115 09:42:00.978845   60422 addons.go:480] Verifying addon ingress=true in "addons-209049"
	I1115 09:42:00.978898   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.598355525s)
	I1115 09:42:00.978930   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.59921401s)
	I1115 09:42:00.979036   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.590801259s)
	I1115 09:42:00.979116   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.590712497s)
	I1115 09:42:00.979171   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.590190642s)
	I1115 09:42:00.979223   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.583510046s)
	I1115 09:42:00.979279   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.492887332s)
	I1115 09:42:00.979297   60422 addons.go:480] Verifying addon registry=true in "addons-209049"
	I1115 09:42:00.979366   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.1964442s)
	I1115 09:42:00.979393   60422 addons.go:480] Verifying addon metrics-server=true in "addons-209049"
	I1115 09:42:00.979441   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.984894825s)
	I1115 09:42:00.981531   60422 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-209049 service yakd-dashboard -n yakd-dashboard
	
	I1115 09:42:00.981553   60422 out.go:179] * Verifying ingress addon...
	I1115 09:42:00.981556   60422 out.go:179] * Verifying registry addon...
	I1115 09:42:00.984240   60422 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1115 09:42:00.984257   60422 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1115 09:42:00.987026   60422 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1115 09:42:00.987045   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:00.987149   60422 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1115 09:42:00.987175   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:01.487816   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:01.492609   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:01.786212   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.306367581s)
	W1115 09:42:01.786279   60422 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1115 09:42:01.786314   60422 retry.go:31] will retry after 168.267252ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
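[Editor's note] The failure above ("no matches for kind \"VolumeSnapshotClass\" ... ensure CRDs are installed first") occurs because the VolumeSnapshotClass object is applied in the same kubectl invocation that creates its CRD, and the API server has not yet registered the new type when the custom resource is submitted. Minikube handles this by retrying (retry.go above); the re-apply with --force a few lines below eventually completes at 09:42:04. A minimal sketch of that retry-with-backoff pattern is shown here; the helper name, attempt count, and backoff schedule are illustrative assumptions, not minikube's actual retry.go.

// retryapply.go - illustrative sketch only; assumes kubectl is on PATH.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retryApply re-runs `kubectl apply -f ...` until it succeeds or attempts run out,
// mirroring the "apply failed, will retry" / "will retry after ..." lines above.
func retryApply(files []string, attempts int, backoff time.Duration) error {
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed: %v\n%s", err, out)
		time.Sleep(backoff)
		backoff *= 2 // simple exponential backoff; minikube's actual schedule may differ
	}
	return lastErr
}

func main() {
	err := retryApply([]string{
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml", // needs the CRD above to be established first
	}, 5, 200*time.Millisecond)
	fmt.Println("result:", err)
}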
	I1115 09:42:01.786427   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.998579651s)
	I1115 09:42:01.786473   60422 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-209049"
	I1115 09:42:01.787853   60422 out.go:179] * Verifying csi-hostpath-driver addon...
	I1115 09:42:01.789782   60422 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1115 09:42:01.792883   60422 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1115 09:42:01.792981   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:42:01.903639   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:01.955044   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1115 09:42:01.987560   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:01.987614   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:02.293814   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:02.487734   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:02.487817   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:02.793281   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:02.988176   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:02.988235   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:03.121182   60422 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1115 09:42:03.121262   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:42:03.138990   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:42:03.238301   60422 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1115 09:42:03.250724   60422 addons.go:239] Setting addon gcp-auth=true in "addons-209049"
	I1115 09:42:03.250788   60422 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:42:03.251216   60422 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:42:03.268602   60422 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1115 09:42:03.268664   60422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:42:03.286048   60422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:42:03.293947   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:03.487742   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:03.487945   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:03.792707   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:03.987264   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:03.987434   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:04.293399   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:42:04.402275   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:04.483112   60422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.528015789s)
	I1115 09:42:04.483233   60422 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.214599353s)
	I1115 09:42:04.485015   60422 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1115 09:42:04.486250   60422 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1115 09:42:04.487325   60422 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1115 09:42:04.487365   60422 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1115 09:42:04.488288   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:04.488531   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:04.500931   60422 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1115 09:42:04.500971   60422 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1115 09:42:04.513566   60422 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1115 09:42:04.513591   60422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1115 09:42:04.526603   60422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1115 09:42:04.793100   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:04.878809   60422 addons.go:480] Verifying addon gcp-auth=true in "addons-209049"
	I1115 09:42:04.880181   60422 out.go:179] * Verifying gcp-auth addon...
	I1115 09:42:04.881911   60422 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1115 09:42:04.893931   60422 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1115 09:42:04.893973   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:04.987779   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:04.987966   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:05.292800   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:05.385436   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:05.487744   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:05.488089   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:05.792464   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:05.885025   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:05.987045   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:05.987257   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:06.293224   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:06.384834   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:42:06.402333   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:06.486990   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:06.487237   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:06.793502   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:06.885167   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:06.987904   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:06.988063   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:07.293388   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:07.385226   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:07.487682   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:07.487937   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:07.792773   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:07.885474   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:07.987244   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:07.987635   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:08.293412   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:08.385280   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:42:08.402997   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:08.488282   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:08.488535   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:08.793325   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:08.885174   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:08.987350   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:08.987527   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:09.293738   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:09.385463   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:09.487029   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:09.487085   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:09.792849   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:09.885623   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:09.987689   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:09.987856   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:10.292719   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:10.385462   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:10.487753   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:10.488002   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:10.792606   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:10.885354   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:42:10.902574   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:10.987212   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:10.987435   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:11.293571   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:11.385338   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:11.487739   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:11.487796   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:11.792425   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:11.885071   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:11.987245   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:11.987311   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:12.293270   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:12.384808   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:12.486998   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:12.487151   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:12.792998   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:12.884470   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:12.987408   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:12.987647   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:13.293884   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:13.385724   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:42:13.402100   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:13.487928   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:13.488025   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:13.792596   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:13.885268   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:13.986902   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:13.987013   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:14.292636   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:14.385265   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:14.487694   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:14.487917   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:14.792730   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:14.885514   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:14.987701   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:14.987863   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:15.292705   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:15.385332   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:42:15.402741   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:15.487585   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:15.487751   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:15.792518   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:15.885070   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:15.987246   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:15.987476   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:16.293555   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:16.385230   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:16.487343   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:16.487413   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:16.793260   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:16.884862   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:16.988852   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:16.988908   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:17.292569   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:17.385039   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:17.487226   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:17.487475   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:17.793580   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:17.885042   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:42:17.902320   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:17.987229   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:17.987341   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:18.293454   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:18.385308   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:18.487314   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:18.487332   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:18.793125   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:18.885347   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:18.987326   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:18.987519   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:19.293441   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:19.385223   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:19.487595   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:19.487676   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:19.793940   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:19.885633   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:19.987618   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:19.987774   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:20.292599   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:20.385438   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:42:20.401732   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:20.487310   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:20.487538   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:20.793492   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:20.885104   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:20.986845   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:20.986977   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:21.293161   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:21.384938   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:21.486781   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:21.487010   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:21.792873   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:21.885441   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:21.987539   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:21.987712   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:22.292496   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:22.385279   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:42:22.403122   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:22.487541   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:22.487694   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:22.793342   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:22.884821   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:22.986878   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:22.987114   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:23.293282   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:23.385082   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:23.487320   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:23.487519   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:23.793334   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:23.885298   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:23.987255   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:23.987393   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:24.293339   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:24.385259   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:24.487433   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:24.487578   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:24.793504   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:24.885456   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:42:24.902746   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:24.987586   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:24.987814   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:25.292437   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:25.385420   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:25.487680   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:25.487833   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:25.792421   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:25.884871   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:25.987294   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:25.987526   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:26.293471   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:26.385109   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:26.487125   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:26.487370   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:26.792911   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:26.885870   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:26.987973   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:26.988084   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:27.292601   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:27.385546   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:42:27.402078   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:27.487662   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:27.487902   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:27.792691   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:27.885647   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:27.987511   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:27.987744   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:28.294172   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:28.384820   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:28.486771   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:28.486945   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:28.792412   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:28.885169   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:28.986896   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:28.987069   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:29.292829   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:29.385477   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:29.487655   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:29.487723   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:29.792766   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:29.885604   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:42:29.902129   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:29.987569   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:29.989486   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:30.292625   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:30.385361   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:30.487467   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:30.487593   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:30.793244   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:30.885106   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:30.987062   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:30.987211   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:31.293257   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:31.385122   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:31.487359   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:31.487621   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:31.793528   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:31.885266   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:42:31.902906   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:31.987708   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:31.987819   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:32.292628   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:32.385399   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:32.487560   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:32.487722   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:32.792541   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:32.885123   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:32.987558   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:32.987702   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:33.293121   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:33.386834   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:33.487357   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:33.487641   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:33.793332   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:33.885314   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:33.987255   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:33.987418   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:34.293063   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:34.384713   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:42:34.402197   60422 node_ready.go:57] node "addons-209049" has "Ready":"False" status (will retry)
	I1115 09:42:34.487802   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:34.488046   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:34.792611   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:34.885137   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:34.987270   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:34.987457   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:35.293354   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:35.385153   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:35.487633   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:35.487828   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:35.792827   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:35.885648   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:35.987671   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:35.987796   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:36.294037   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:36.385421   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:36.474535   60422 node_ready.go:49] node "addons-209049" is "Ready"
	I1115 09:42:36.474566   60422 node_ready.go:38] duration metric: took 39.075238511s for node "addons-209049" to be "Ready" ...
	I1115 09:42:36.474588   60422 api_server.go:52] waiting for apiserver process to appear ...
	I1115 09:42:36.474656   60422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:42:36.488337   60422 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1115 09:42:36.488362   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:36.488710   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:36.493611   60422 api_server.go:72] duration metric: took 41.103191335s to wait for apiserver process to appear ...
	I1115 09:42:36.493639   60422 api_server.go:88] waiting for apiserver healthz status ...
	I1115 09:42:36.493657   60422 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 09:42:36.498219   60422 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
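[Editor's note] The api_server.go lines above probe https://192.168.49.2:8443/healthz until it returns 200 "ok" before reading the control-plane version. A minimal sketch of such a probe follows; the hard-coded endpoint comes from the log, but skipping TLS verification and the attempt/interval values are shortcuts for illustration (minikube authenticates with the cluster's client certificates rather than disabling verification).

// healthz.go - illustrative sketch only.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustrative shortcut: a real client would present the cluster CA and client cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 30; i++ {
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		time.Sleep(time.Second) // keep polling until the apiserver answers
	}
	fmt.Println("apiserver never became healthy")
}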
	I1115 09:42:36.499097   60422 api_server.go:141] control plane version: v1.34.1
	I1115 09:42:36.499126   60422 api_server.go:131] duration metric: took 5.478642ms to wait for apiserver health ...
	I1115 09:42:36.499163   60422 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 09:42:36.505817   60422 system_pods.go:59] 20 kube-system pods found
	I1115 09:42:36.505849   60422 system_pods.go:61] "amd-gpu-device-plugin-zxglt" [2d4f6e85-2a3f-4fc9-85de-7a92b0cc0241] Pending
	I1115 09:42:36.505857   60422 system_pods.go:61] "coredns-66bc5c9577-4xn7s" [b5451c3e-5e31-46fc-aca6-1c24946da89a] Pending
	I1115 09:42:36.505863   60422 system_pods.go:61] "csi-hostpath-attacher-0" [75eb60d9-894a-4839-a716-6bfb2f15783b] Pending
	I1115 09:42:36.505869   60422 system_pods.go:61] "csi-hostpath-resizer-0" [557b0a30-f93a-4371-bd28-7ce29fe0f42f] Pending
	I1115 09:42:36.505874   60422 system_pods.go:61] "csi-hostpathplugin-n2grt" [1e55d345-4538-4c11-9d5d-27e7a8026753] Pending
	I1115 09:42:36.505879   60422 system_pods.go:61] "etcd-addons-209049" [dbb94784-4512-4f24-8198-509c45540ddc] Running
	I1115 09:42:36.505884   60422 system_pods.go:61] "kindnet-p4lm7" [99c22a89-4507-418b-bf45-7c29147f179f] Running
	I1115 09:42:36.505889   60422 system_pods.go:61] "kube-apiserver-addons-209049" [db7797f5-d9d4-4fd1-bd3e-ca7cf178a6ed] Running
	I1115 09:42:36.505893   60422 system_pods.go:61] "kube-controller-manager-addons-209049" [1a32ce38-b5b0-481d-980a-42f5e37030cb] Running
	I1115 09:42:36.505910   60422 system_pods.go:61] "kube-ingress-dns-minikube" [48982f43-9ec1-4e9d-a65c-a93e59979a96] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:42:36.505920   60422 system_pods.go:61] "kube-proxy-vkr7k" [e066d5c7-375f-467b-af24-507d2a8393bb] Running
	I1115 09:42:36.505928   60422 system_pods.go:61] "kube-scheduler-addons-209049" [e8762fe1-3552-421f-8cd8-0e11d7f817a7] Running
	I1115 09:42:36.505938   60422 system_pods.go:61] "metrics-server-85b7d694d7-sgjrz" [72926a73-00e6-4532-92f6-7db18c63f53f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:42:36.505944   60422 system_pods.go:61] "nvidia-device-plugin-daemonset-qtrg4" [2ece1371-7279-4f25-ad2d-270518aadb18] Pending
	I1115 09:42:36.505969   60422 system_pods.go:61] "registry-6b586f9694-fwbg5" [83b0e192-16e8-40e2-9ff5-a5964957dc8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:42:36.505982   60422 system_pods.go:61] "registry-creds-764b6fb674-d6rh5" [e3dfc52f-3ad1-4877-8af4-d96c6b46d6a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:42:36.505987   60422 system_pods.go:61] "registry-proxy-xzbqg" [bcc22f7e-dc0a-4976-9707-3acbdc5421f3] Pending
	I1115 09:42:36.505993   60422 system_pods.go:61] "snapshot-controller-7d9fbc56b8-blfn9" [074c2f9d-00a5-4f0d-a757-4cd3d19884a2] Pending
	I1115 09:42:36.506001   60422 system_pods.go:61] "snapshot-controller-7d9fbc56b8-mqtdg" [68c6aac6-c7a8-43e8-aeea-1a1bc2a3ffe7] Pending
	I1115 09:42:36.506008   60422 system_pods.go:61] "storage-provisioner" [5f3882a6-a000-4050-86f4-5d30ce6faeff] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:42:36.506016   60422 system_pods.go:74] duration metric: took 6.842415ms to wait for pod list to return data ...
	I1115 09:42:36.506030   60422 default_sa.go:34] waiting for default service account to be created ...
	I1115 09:42:36.507923   60422 default_sa.go:45] found service account: "default"
	I1115 09:42:36.507945   60422 default_sa.go:55] duration metric: took 1.908701ms for default service account to be created ...
	I1115 09:42:36.507975   60422 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 09:42:36.515011   60422 system_pods.go:86] 20 kube-system pods found
	I1115 09:42:36.515046   60422 system_pods.go:89] "amd-gpu-device-plugin-zxglt" [2d4f6e85-2a3f-4fc9-85de-7a92b0cc0241] Pending
	I1115 09:42:36.515055   60422 system_pods.go:89] "coredns-66bc5c9577-4xn7s" [b5451c3e-5e31-46fc-aca6-1c24946da89a] Pending
	I1115 09:42:36.515061   60422 system_pods.go:89] "csi-hostpath-attacher-0" [75eb60d9-894a-4839-a716-6bfb2f15783b] Pending
	I1115 09:42:36.515073   60422 system_pods.go:89] "csi-hostpath-resizer-0" [557b0a30-f93a-4371-bd28-7ce29fe0f42f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 09:42:36.515079   60422 system_pods.go:89] "csi-hostpathplugin-n2grt" [1e55d345-4538-4c11-9d5d-27e7a8026753] Pending
	I1115 09:42:36.515087   60422 system_pods.go:89] "etcd-addons-209049" [dbb94784-4512-4f24-8198-509c45540ddc] Running
	I1115 09:42:36.515093   60422 system_pods.go:89] "kindnet-p4lm7" [99c22a89-4507-418b-bf45-7c29147f179f] Running
	I1115 09:42:36.515099   60422 system_pods.go:89] "kube-apiserver-addons-209049" [db7797f5-d9d4-4fd1-bd3e-ca7cf178a6ed] Running
	I1115 09:42:36.515105   60422 system_pods.go:89] "kube-controller-manager-addons-209049" [1a32ce38-b5b0-481d-980a-42f5e37030cb] Running
	I1115 09:42:36.515122   60422 system_pods.go:89] "kube-ingress-dns-minikube" [48982f43-9ec1-4e9d-a65c-a93e59979a96] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:42:36.515128   60422 system_pods.go:89] "kube-proxy-vkr7k" [e066d5c7-375f-467b-af24-507d2a8393bb] Running
	I1115 09:42:36.515134   60422 system_pods.go:89] "kube-scheduler-addons-209049" [e8762fe1-3552-421f-8cd8-0e11d7f817a7] Running
	I1115 09:42:36.515143   60422 system_pods.go:89] "metrics-server-85b7d694d7-sgjrz" [72926a73-00e6-4532-92f6-7db18c63f53f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:42:36.515148   60422 system_pods.go:89] "nvidia-device-plugin-daemonset-qtrg4" [2ece1371-7279-4f25-ad2d-270518aadb18] Pending
	I1115 09:42:36.515156   60422 system_pods.go:89] "registry-6b586f9694-fwbg5" [83b0e192-16e8-40e2-9ff5-a5964957dc8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:42:36.515165   60422 system_pods.go:89] "registry-creds-764b6fb674-d6rh5" [e3dfc52f-3ad1-4877-8af4-d96c6b46d6a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:42:36.515170   60422 system_pods.go:89] "registry-proxy-xzbqg" [bcc22f7e-dc0a-4976-9707-3acbdc5421f3] Pending
	I1115 09:42:36.515175   60422 system_pods.go:89] "snapshot-controller-7d9fbc56b8-blfn9" [074c2f9d-00a5-4f0d-a757-4cd3d19884a2] Pending
	I1115 09:42:36.515180   60422 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mqtdg" [68c6aac6-c7a8-43e8-aeea-1a1bc2a3ffe7] Pending
	I1115 09:42:36.515186   60422 system_pods.go:89] "storage-provisioner" [5f3882a6-a000-4050-86f4-5d30ce6faeff] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:42:36.515211   60422 retry.go:31] will retry after 292.770926ms: missing components: kube-dns
	I1115 09:42:36.878320   60422 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1115 09:42:36.878351   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:36.881268   60422 system_pods.go:86] 20 kube-system pods found
	I1115 09:42:36.881304   60422 system_pods.go:89] "amd-gpu-device-plugin-zxglt" [2d4f6e85-2a3f-4fc9-85de-7a92b0cc0241] Pending
	I1115 09:42:36.881317   60422 system_pods.go:89] "coredns-66bc5c9577-4xn7s" [b5451c3e-5e31-46fc-aca6-1c24946da89a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:42:36.881323   60422 system_pods.go:89] "csi-hostpath-attacher-0" [75eb60d9-894a-4839-a716-6bfb2f15783b] Pending
	I1115 09:42:36.881332   60422 system_pods.go:89] "csi-hostpath-resizer-0" [557b0a30-f93a-4371-bd28-7ce29fe0f42f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 09:42:36.881340   60422 system_pods.go:89] "csi-hostpathplugin-n2grt" [1e55d345-4538-4c11-9d5d-27e7a8026753] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1115 09:42:36.881346   60422 system_pods.go:89] "etcd-addons-209049" [dbb94784-4512-4f24-8198-509c45540ddc] Running
	I1115 09:42:36.881352   60422 system_pods.go:89] "kindnet-p4lm7" [99c22a89-4507-418b-bf45-7c29147f179f] Running
	I1115 09:42:36.881358   60422 system_pods.go:89] "kube-apiserver-addons-209049" [db7797f5-d9d4-4fd1-bd3e-ca7cf178a6ed] Running
	I1115 09:42:36.881363   60422 system_pods.go:89] "kube-controller-manager-addons-209049" [1a32ce38-b5b0-481d-980a-42f5e37030cb] Running
	I1115 09:42:36.881373   60422 system_pods.go:89] "kube-ingress-dns-minikube" [48982f43-9ec1-4e9d-a65c-a93e59979a96] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:42:36.881378   60422 system_pods.go:89] "kube-proxy-vkr7k" [e066d5c7-375f-467b-af24-507d2a8393bb] Running
	I1115 09:42:36.881384   60422 system_pods.go:89] "kube-scheduler-addons-209049" [e8762fe1-3552-421f-8cd8-0e11d7f817a7] Running
	I1115 09:42:36.881392   60422 system_pods.go:89] "metrics-server-85b7d694d7-sgjrz" [72926a73-00e6-4532-92f6-7db18c63f53f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:42:36.881397   60422 system_pods.go:89] "nvidia-device-plugin-daemonset-qtrg4" [2ece1371-7279-4f25-ad2d-270518aadb18] Pending
	I1115 09:42:36.881408   60422 system_pods.go:89] "registry-6b586f9694-fwbg5" [83b0e192-16e8-40e2-9ff5-a5964957dc8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:42:36.881415   60422 system_pods.go:89] "registry-creds-764b6fb674-d6rh5" [e3dfc52f-3ad1-4877-8af4-d96c6b46d6a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:42:36.881420   60422 system_pods.go:89] "registry-proxy-xzbqg" [bcc22f7e-dc0a-4976-9707-3acbdc5421f3] Pending
	I1115 09:42:36.881424   60422 system_pods.go:89] "snapshot-controller-7d9fbc56b8-blfn9" [074c2f9d-00a5-4f0d-a757-4cd3d19884a2] Pending
	I1115 09:42:36.881433   60422 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mqtdg" [68c6aac6-c7a8-43e8-aeea-1a1bc2a3ffe7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:42:36.881442   60422 system_pods.go:89] "storage-provisioner" [5f3882a6-a000-4050-86f4-5d30ce6faeff] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:42:36.881468   60422 retry.go:31] will retry after 282.04747ms: missing components: kube-dns
	I1115 09:42:36.885138   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:36.987789   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:36.987880   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:37.180270   60422 system_pods.go:86] 20 kube-system pods found
	I1115 09:42:37.180317   60422 system_pods.go:89] "amd-gpu-device-plugin-zxglt" [2d4f6e85-2a3f-4fc9-85de-7a92b0cc0241] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1115 09:42:37.180329   60422 system_pods.go:89] "coredns-66bc5c9577-4xn7s" [b5451c3e-5e31-46fc-aca6-1c24946da89a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:42:37.180340   60422 system_pods.go:89] "csi-hostpath-attacher-0" [75eb60d9-894a-4839-a716-6bfb2f15783b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 09:42:37.180349   60422 system_pods.go:89] "csi-hostpath-resizer-0" [557b0a30-f93a-4371-bd28-7ce29fe0f42f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 09:42:37.180356   60422 system_pods.go:89] "csi-hostpathplugin-n2grt" [1e55d345-4538-4c11-9d5d-27e7a8026753] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1115 09:42:37.180362   60422 system_pods.go:89] "etcd-addons-209049" [dbb94784-4512-4f24-8198-509c45540ddc] Running
	I1115 09:42:37.180369   60422 system_pods.go:89] "kindnet-p4lm7" [99c22a89-4507-418b-bf45-7c29147f179f] Running
	I1115 09:42:37.180374   60422 system_pods.go:89] "kube-apiserver-addons-209049" [db7797f5-d9d4-4fd1-bd3e-ca7cf178a6ed] Running
	I1115 09:42:37.180379   60422 system_pods.go:89] "kube-controller-manager-addons-209049" [1a32ce38-b5b0-481d-980a-42f5e37030cb] Running
	I1115 09:42:37.180386   60422 system_pods.go:89] "kube-ingress-dns-minikube" [48982f43-9ec1-4e9d-a65c-a93e59979a96] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:42:37.180391   60422 system_pods.go:89] "kube-proxy-vkr7k" [e066d5c7-375f-467b-af24-507d2a8393bb] Running
	I1115 09:42:37.180396   60422 system_pods.go:89] "kube-scheduler-addons-209049" [e8762fe1-3552-421f-8cd8-0e11d7f817a7] Running
	I1115 09:42:37.180404   60422 system_pods.go:89] "metrics-server-85b7d694d7-sgjrz" [72926a73-00e6-4532-92f6-7db18c63f53f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:42:37.180413   60422 system_pods.go:89] "nvidia-device-plugin-daemonset-qtrg4" [2ece1371-7279-4f25-ad2d-270518aadb18] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1115 09:42:37.180421   60422 system_pods.go:89] "registry-6b586f9694-fwbg5" [83b0e192-16e8-40e2-9ff5-a5964957dc8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:42:37.180429   60422 system_pods.go:89] "registry-creds-764b6fb674-d6rh5" [e3dfc52f-3ad1-4877-8af4-d96c6b46d6a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:42:37.180434   60422 system_pods.go:89] "registry-proxy-xzbqg" [bcc22f7e-dc0a-4976-9707-3acbdc5421f3] Pending
	I1115 09:42:37.180442   60422 system_pods.go:89] "snapshot-controller-7d9fbc56b8-blfn9" [074c2f9d-00a5-4f0d-a757-4cd3d19884a2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:42:37.180456   60422 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mqtdg" [68c6aac6-c7a8-43e8-aeea-1a1bc2a3ffe7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:42:37.180465   60422 system_pods.go:89] "storage-provisioner" [5f3882a6-a000-4050-86f4-5d30ce6faeff] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:42:37.180490   60422 retry.go:31] will retry after 336.693004ms: missing components: kube-dns
	I1115 09:42:37.295402   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:37.394987   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:37.495855   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:37.495934   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:37.521803   60422 system_pods.go:86] 20 kube-system pods found
	I1115 09:42:37.521840   60422 system_pods.go:89] "amd-gpu-device-plugin-zxglt" [2d4f6e85-2a3f-4fc9-85de-7a92b0cc0241] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1115 09:42:37.521848   60422 system_pods.go:89] "coredns-66bc5c9577-4xn7s" [b5451c3e-5e31-46fc-aca6-1c24946da89a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:42:37.521857   60422 system_pods.go:89] "csi-hostpath-attacher-0" [75eb60d9-894a-4839-a716-6bfb2f15783b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 09:42:37.521862   60422 system_pods.go:89] "csi-hostpath-resizer-0" [557b0a30-f93a-4371-bd28-7ce29fe0f42f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 09:42:37.521868   60422 system_pods.go:89] "csi-hostpathplugin-n2grt" [1e55d345-4538-4c11-9d5d-27e7a8026753] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1115 09:42:37.521872   60422 system_pods.go:89] "etcd-addons-209049" [dbb94784-4512-4f24-8198-509c45540ddc] Running
	I1115 09:42:37.521876   60422 system_pods.go:89] "kindnet-p4lm7" [99c22a89-4507-418b-bf45-7c29147f179f] Running
	I1115 09:42:37.521880   60422 system_pods.go:89] "kube-apiserver-addons-209049" [db7797f5-d9d4-4fd1-bd3e-ca7cf178a6ed] Running
	I1115 09:42:37.521883   60422 system_pods.go:89] "kube-controller-manager-addons-209049" [1a32ce38-b5b0-481d-980a-42f5e37030cb] Running
	I1115 09:42:37.521890   60422 system_pods.go:89] "kube-ingress-dns-minikube" [48982f43-9ec1-4e9d-a65c-a93e59979a96] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:42:37.521896   60422 system_pods.go:89] "kube-proxy-vkr7k" [e066d5c7-375f-467b-af24-507d2a8393bb] Running
	I1115 09:42:37.521899   60422 system_pods.go:89] "kube-scheduler-addons-209049" [e8762fe1-3552-421f-8cd8-0e11d7f817a7] Running
	I1115 09:42:37.521908   60422 system_pods.go:89] "metrics-server-85b7d694d7-sgjrz" [72926a73-00e6-4532-92f6-7db18c63f53f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:42:37.521917   60422 system_pods.go:89] "nvidia-device-plugin-daemonset-qtrg4" [2ece1371-7279-4f25-ad2d-270518aadb18] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1115 09:42:37.521922   60422 system_pods.go:89] "registry-6b586f9694-fwbg5" [83b0e192-16e8-40e2-9ff5-a5964957dc8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:42:37.521929   60422 system_pods.go:89] "registry-creds-764b6fb674-d6rh5" [e3dfc52f-3ad1-4877-8af4-d96c6b46d6a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:42:37.521934   60422 system_pods.go:89] "registry-proxy-xzbqg" [bcc22f7e-dc0a-4976-9707-3acbdc5421f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1115 09:42:37.521941   60422 system_pods.go:89] "snapshot-controller-7d9fbc56b8-blfn9" [074c2f9d-00a5-4f0d-a757-4cd3d19884a2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:42:37.521964   60422 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mqtdg" [68c6aac6-c7a8-43e8-aeea-1a1bc2a3ffe7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:42:37.521970   60422 system_pods.go:89] "storage-provisioner" [5f3882a6-a000-4050-86f4-5d30ce6faeff] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:42:37.521989   60422 retry.go:31] will retry after 516.191783ms: missing components: kube-dns
	I1115 09:42:37.793304   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:37.884598   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:37.988291   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:37.988347   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:38.042848   60422 system_pods.go:86] 20 kube-system pods found
	I1115 09:42:38.042887   60422 system_pods.go:89] "amd-gpu-device-plugin-zxglt" [2d4f6e85-2a3f-4fc9-85de-7a92b0cc0241] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1115 09:42:38.042894   60422 system_pods.go:89] "coredns-66bc5c9577-4xn7s" [b5451c3e-5e31-46fc-aca6-1c24946da89a] Running
	I1115 09:42:38.042904   60422 system_pods.go:89] "csi-hostpath-attacher-0" [75eb60d9-894a-4839-a716-6bfb2f15783b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 09:42:38.042910   60422 system_pods.go:89] "csi-hostpath-resizer-0" [557b0a30-f93a-4371-bd28-7ce29fe0f42f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 09:42:38.042915   60422 system_pods.go:89] "csi-hostpathplugin-n2grt" [1e55d345-4538-4c11-9d5d-27e7a8026753] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1115 09:42:38.042920   60422 system_pods.go:89] "etcd-addons-209049" [dbb94784-4512-4f24-8198-509c45540ddc] Running
	I1115 09:42:38.042924   60422 system_pods.go:89] "kindnet-p4lm7" [99c22a89-4507-418b-bf45-7c29147f179f] Running
	I1115 09:42:38.042928   60422 system_pods.go:89] "kube-apiserver-addons-209049" [db7797f5-d9d4-4fd1-bd3e-ca7cf178a6ed] Running
	I1115 09:42:38.042931   60422 system_pods.go:89] "kube-controller-manager-addons-209049" [1a32ce38-b5b0-481d-980a-42f5e37030cb] Running
	I1115 09:42:38.042940   60422 system_pods.go:89] "kube-ingress-dns-minikube" [48982f43-9ec1-4e9d-a65c-a93e59979a96] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:42:38.042944   60422 system_pods.go:89] "kube-proxy-vkr7k" [e066d5c7-375f-467b-af24-507d2a8393bb] Running
	I1115 09:42:38.042950   60422 system_pods.go:89] "kube-scheduler-addons-209049" [e8762fe1-3552-421f-8cd8-0e11d7f817a7] Running
	I1115 09:42:38.042970   60422 system_pods.go:89] "metrics-server-85b7d694d7-sgjrz" [72926a73-00e6-4532-92f6-7db18c63f53f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:42:38.042983   60422 system_pods.go:89] "nvidia-device-plugin-daemonset-qtrg4" [2ece1371-7279-4f25-ad2d-270518aadb18] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1115 09:42:38.042994   60422 system_pods.go:89] "registry-6b586f9694-fwbg5" [83b0e192-16e8-40e2-9ff5-a5964957dc8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:42:38.043003   60422 system_pods.go:89] "registry-creds-764b6fb674-d6rh5" [e3dfc52f-3ad1-4877-8af4-d96c6b46d6a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:42:38.043008   60422 system_pods.go:89] "registry-proxy-xzbqg" [bcc22f7e-dc0a-4976-9707-3acbdc5421f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1115 09:42:38.043016   60422 system_pods.go:89] "snapshot-controller-7d9fbc56b8-blfn9" [074c2f9d-00a5-4f0d-a757-4cd3d19884a2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:42:38.043022   60422 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mqtdg" [68c6aac6-c7a8-43e8-aeea-1a1bc2a3ffe7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:42:38.043028   60422 system_pods.go:89] "storage-provisioner" [5f3882a6-a000-4050-86f4-5d30ce6faeff] Running
	I1115 09:42:38.043036   60422 system_pods.go:126] duration metric: took 1.535055494s to wait for k8s-apps to be running ...
	I1115 09:42:38.043047   60422 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 09:42:38.043093   60422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:42:38.056649   60422 system_svc.go:56] duration metric: took 13.592264ms WaitForService to wait for kubelet
	I1115 09:42:38.056675   60422 kubeadm.go:587] duration metric: took 42.666262511s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:42:38.056693   60422 node_conditions.go:102] verifying NodePressure condition ...
	I1115 09:42:38.075130   60422 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:42:38.075162   60422 node_conditions.go:123] node cpu capacity is 8
	I1115 09:42:38.075177   60422 node_conditions.go:105] duration metric: took 18.47923ms to run NodePressure ...
	I1115 09:42:38.075189   60422 start.go:242] waiting for startup goroutines ...
	I1115 09:42:38.294373   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:38.385071   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:38.487805   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:38.487980   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:38.793561   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:38.885535   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:38.988250   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:38.988349   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:39.294691   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:39.385652   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:39.488900   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:39.489434   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:39.793025   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:39.886117   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:39.988633   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:39.989018   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:40.296473   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:40.394995   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:40.489027   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:40.489135   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:40.793631   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:40.885854   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:40.988041   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:40.988331   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:41.294567   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:41.386128   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:41.487749   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:41.487791   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:41.794089   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:41.885889   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:41.988352   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:41.988884   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:42.293528   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:42.385693   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:42.488013   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:42.488025   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:42.794413   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:42.885187   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:42.988439   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:42.988570   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:43.294366   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:43.385170   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:43.488571   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:43.488614   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:43.793694   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:43.885633   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:43.987721   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:43.987904   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:44.293695   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:44.394548   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:44.487688   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:44.487742   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:44.794147   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:44.885732   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:44.987784   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:44.988076   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:45.294191   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:45.385028   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:45.488671   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:45.488736   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:45.793056   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:45.885749   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:45.987938   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:45.988029   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:46.293632   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:46.385649   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:46.487668   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:46.487697   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:46.793650   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:46.885333   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:46.987301   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:46.987454   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:47.294091   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:47.385639   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:47.487682   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:47.487754   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:47.792720   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:47.885198   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:47.987892   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:47.987970   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:48.293421   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:48.385232   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:48.488126   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:48.488367   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:48.793225   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:48.884555   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:48.987526   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:48.987729   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:49.293901   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:49.385613   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:49.487408   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:49.487622   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:49.793984   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:49.885446   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:49.987391   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:49.987391   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:50.293941   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:50.385547   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:50.487651   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:50.487684   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:50.793375   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:50.885558   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:50.988153   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:50.988252   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:51.294148   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:51.384742   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:51.487598   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:51.487618   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:51.793074   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:51.885596   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:51.987442   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:51.987481   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:52.294327   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:52.394813   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:52.488118   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:52.488245   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:52.793261   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:52.884780   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:52.987793   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:52.987842   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:53.293574   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:53.385847   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:53.487940   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:53.488056   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:53.793900   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:53.885545   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:53.987595   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:53.987595   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:54.293985   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:54.385861   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:54.487829   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:54.487964   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:54.792576   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:54.885582   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:54.987601   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:54.987643   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:55.293602   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:55.385733   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:55.488258   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:55.488328   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:55.793106   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:55.885791   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:55.987830   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:55.987837   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:56.293753   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:56.385687   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:56.487991   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:56.488030   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:56.793553   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:56.885458   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:56.987208   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:56.987216   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:57.293623   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:57.394131   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:57.487770   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:57.487827   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:57.792733   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:57.885482   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:57.987436   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:57.987634   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:58.294512   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:58.385277   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:58.490293   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:58.490711   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:58.793214   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:58.885603   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:58.987695   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:58.987705   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:59.293631   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:59.385672   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:59.487381   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:59.487427   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:42:59.793883   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:42:59.884725   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:42:59.987478   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:42:59.987496   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:00.293352   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:00.384904   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:00.487780   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:00.487861   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:00.793040   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:00.885829   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:00.987879   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:00.987977   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:01.293333   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:01.394156   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:01.488071   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:01.488108   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:01.793528   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:01.885087   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:01.988418   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:01.988597   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:02.294473   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:02.385200   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:02.488349   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:02.488607   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:02.793589   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:02.885000   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:02.987888   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:02.987930   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:03.292943   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:03.385871   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:03.487947   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:03.487947   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:03.793493   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:03.885421   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:03.987774   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:03.987940   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:04.293322   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:04.385692   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:04.487829   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:04.487826   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:04.792907   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:04.886019   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:04.988009   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:04.988122   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:05.292266   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:05.384507   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:05.488265   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:05.488481   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:05.793233   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:05.884897   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:05.988277   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:05.988395   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:06.300313   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:06.399738   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:06.487787   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:06.488035   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:06.793542   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:06.884794   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:06.987745   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:06.987843   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:07.293542   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:07.384965   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:07.488300   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:07.488399   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:07.793316   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:07.884745   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:07.987515   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:07.987566   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:08.295372   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:08.385272   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:08.489129   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:08.489263   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:08.793765   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:08.894354   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:08.986887   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:08.987050   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:09.293252   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:09.393529   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:09.494353   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:09.494443   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:09.793060   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:09.885460   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:09.987312   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:09.987444   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:10.293882   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:10.385616   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:10.487752   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:10.487855   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:10.793923   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:10.885731   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:10.988202   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:10.988474   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:11.294352   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:11.385208   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:11.488440   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:11.488493   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:11.793662   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:11.885880   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:11.988073   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:11.988173   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:12.293934   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:12.385583   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:12.487878   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:12.487993   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:12.793467   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:12.885252   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:12.988513   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:12.988568   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:13.293929   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:13.386147   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:13.489901   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:13.490360   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:13.793572   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:13.893814   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:13.987716   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:13.987753   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:14.293551   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:14.394144   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:14.487794   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:14.487890   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:14.792670   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:14.885275   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:14.986807   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:14.986870   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:15.293482   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:15.384924   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:15.488343   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:15.488385   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:15.793704   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:15.885294   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:15.988381   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:15.988660   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:16.293171   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:16.384874   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:16.487854   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:16.487936   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:16.792723   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:16.885321   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:16.987130   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:16.987196   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:17.293529   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:17.385107   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:17.488103   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:17.488148   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:17.793147   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:17.885747   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:17.988772   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:17.989480   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:18.295788   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:18.385510   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:18.487926   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:18.488120   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:18.793246   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:18.885511   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:18.987996   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:18.988203   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:19.294248   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:19.385181   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:19.487471   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:19.487471   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:19.794495   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:19.894936   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:19.995315   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:19.995465   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:20.293827   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:20.385915   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:20.488546   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:20.488710   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:20.794303   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:20.885339   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:20.988705   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:20.988929   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:21.294150   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:21.384905   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:21.487934   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:21.487976   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:21.793242   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:21.884710   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:21.987756   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:21.987836   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:22.293617   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:22.385881   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:22.487723   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:22.487860   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:22.793729   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:22.885180   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:22.988102   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:22.988106   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:23.293178   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:23.384755   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:23.487831   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:23.487870   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:23.793651   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:23.884946   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:23.987971   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:23.987994   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:24.292984   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:24.385622   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:24.487637   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:24.487785   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:24.792754   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:24.885319   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:24.987238   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:24.987273   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:25.293631   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:25.393766   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:25.487788   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:25.487853   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:25.793244   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:25.884801   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:25.987557   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:25.987730   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:26.292795   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:26.385226   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:26.487997   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:26.488085   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:26.793016   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:26.886161   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:26.988310   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:26.988343   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:27.293573   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:27.385251   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:27.488147   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:27.488257   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:27.794124   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:27.894580   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:27.995015   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:43:27.995083   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:28.293663   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:28.385311   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:28.487680   60422 kapi.go:107] duration metric: took 1m27.50342045s to wait for kubernetes.io/minikube-addons=registry ...
	I1115 09:43:28.487685   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:28.793448   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:28.884925   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:28.987878   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:29.293066   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:29.385560   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:29.487700   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:29.794669   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:29.885323   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:29.989068   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:30.293438   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:30.387225   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:30.489547   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:30.793729   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:30.885670   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:30.988447   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:31.294405   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:31.385482   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:31.487327   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:31.793624   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:31.885399   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:31.987988   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:32.293263   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:32.384810   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:32.488424   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:32.793870   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:32.886562   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:32.989805   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:33.297249   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:33.385554   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:33.488969   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:33.794259   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:33.886486   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:33.989988   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:34.293605   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:34.385299   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:34.490667   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:34.793835   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:34.885995   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:34.987849   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:35.293768   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:35.387382   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:35.488276   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:35.793937   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:35.886585   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:35.987923   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:36.294704   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:36.385460   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:36.487771   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:36.793872   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:36.885760   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:36.988073   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:37.293337   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:37.384825   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:37.488103   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:37.793749   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:37.885150   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:37.988704   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:38.294446   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:38.385525   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:38.487630   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:38.794231   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:38.884986   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:38.988076   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:39.293211   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:39.384702   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:39.487747   60422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:43:39.793298   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:39.884921   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:39.987917   60422 kapi.go:107] duration metric: took 1m39.00367176s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1115 09:43:40.293038   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:40.386016   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:40.793083   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:40.885746   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:41.293538   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:41.385421   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:41.794155   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:41.884513   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:42.292803   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:42.385547   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:42.793367   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:42.885007   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:43.293198   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:43.385152   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:43.794544   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:43.885456   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:44.294346   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:44.385328   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:44.794000   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:44.886145   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:45.293815   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:45.394059   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:43:45.793799   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:45.885636   60422 kapi.go:107] duration metric: took 1m41.003723506s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1115 09:43:45.888085   60422 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-209049 cluster.
	I1115 09:43:45.889451   60422 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1115 09:43:45.890919   60422 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
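The three gcp-auth messages above amount to a short how-to for opting a pod out of credential mounting. A minimal sketch, assuming the admission webhook keys only on the presence of the `gcp-auth-skip-secret` label (the pod name and label value below are illustrative, not taken from the test run):

  kubectl --context addons-209049 run skip-creds-demo --image=busybox --restart=Never \
    --labels=gcp-auth-skip-secret=true -- sleep 3600

Pods created without that label in this cluster receive the mounted GCP credentials, and pods that already existed pick the mount up only after being recreated or after rerunning `addons enable gcp-auth --refresh`, as the messages above note.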
	I1115 09:43:46.293145   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:46.793125   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:47.293079   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:47.793413   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:48.294001   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:48.793128   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:49.294916   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:49.794587   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:50.294900   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:50.793295   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:51.294272   60422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:43:51.793727   60422 kapi.go:107] duration metric: took 1m50.003941095s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1115 09:43:51.795692   60422 out.go:179] * Enabled addons: cloud-spanner, default-storageclass, storage-provisioner-rancher, inspektor-gadget, ingress-dns, registry-creds, storage-provisioner, amd-gpu-device-plugin, nvidia-device-plugin, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1115 09:43:51.797243   60422 addons.go:515] duration metric: took 1m56.406809677s for enable addons: enabled=[cloud-spanner default-storageclass storage-provisioner-rancher inspektor-gadget ingress-dns registry-creds storage-provisioner amd-gpu-device-plugin nvidia-device-plugin metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1115 09:43:51.797295   60422 start.go:247] waiting for cluster config update ...
	I1115 09:43:51.797317   60422 start.go:256] writing updated cluster config ...
	I1115 09:43:51.797597   60422 ssh_runner.go:195] Run: rm -f paused
	I1115 09:43:51.802297   60422 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:43:51.805457   60422 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4xn7s" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:43:51.809596   60422 pod_ready.go:94] pod "coredns-66bc5c9577-4xn7s" is "Ready"
	I1115 09:43:51.809619   60422 pod_ready.go:86] duration metric: took 4.139466ms for pod "coredns-66bc5c9577-4xn7s" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:43:51.894024   60422 pod_ready.go:83] waiting for pod "etcd-addons-209049" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:43:51.898633   60422 pod_ready.go:94] pod "etcd-addons-209049" is "Ready"
	I1115 09:43:51.898657   60422 pod_ready.go:86] duration metric: took 4.607036ms for pod "etcd-addons-209049" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:43:51.900682   60422 pod_ready.go:83] waiting for pod "kube-apiserver-addons-209049" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:43:51.904666   60422 pod_ready.go:94] pod "kube-apiserver-addons-209049" is "Ready"
	I1115 09:43:51.904688   60422 pod_ready.go:86] duration metric: took 3.981533ms for pod "kube-apiserver-addons-209049" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:43:51.906510   60422 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-209049" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:43:52.207192   60422 pod_ready.go:94] pod "kube-controller-manager-addons-209049" is "Ready"
	I1115 09:43:52.207229   60422 pod_ready.go:86] duration metric: took 300.696386ms for pod "kube-controller-manager-addons-209049" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:43:52.405894   60422 pod_ready.go:83] waiting for pod "kube-proxy-vkr7k" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:43:52.806067   60422 pod_ready.go:94] pod "kube-proxy-vkr7k" is "Ready"
	I1115 09:43:52.806094   60422 pod_ready.go:86] duration metric: took 400.174126ms for pod "kube-proxy-vkr7k" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:43:53.006153   60422 pod_ready.go:83] waiting for pod "kube-scheduler-addons-209049" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:43:53.406054   60422 pod_ready.go:94] pod "kube-scheduler-addons-209049" is "Ready"
	I1115 09:43:53.406080   60422 pod_ready.go:86] duration metric: took 399.900012ms for pod "kube-scheduler-addons-209049" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:43:53.406092   60422 pod_ready.go:40] duration metric: took 1.603760852s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
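The readiness waits logged above give every kube-system pod carrying one of the listed labels up to 4m0s to report Ready. A rough manual equivalent for a single selector, assuming the kubeconfig context created for this profile (the command is illustrative, not part of the test output):

  kubectl --context addons-209049 -n kube-system wait pod -l component=etcd \
    --for=condition=Ready --timeout=4m

The same check is repeated for each of the six selectors listed in the log (k8s-app=kube-dns, component=etcd, component=kube-apiserver, component=kube-controller-manager, k8s-app=kube-proxy, component=kube-scheduler) before the cluster is declared ready.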
	I1115 09:43:53.453659   60422 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 09:43:53.456563   60422 out.go:179] * Done! kubectl is now configured to use "addons-209049" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 15 09:43:54 addons-209049 crio[898]: time="2025-11-15T09:43:54.313787256Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=769c912d-dfed-4c54-877e-d972592f9602 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:43:54 addons-209049 crio[898]: time="2025-11-15T09:43:54.314400637Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=da714fda-e6b0-485a-86a9-2346e9f4af05 name=/runtime.v1.ImageService/PullImage
	Nov 15 09:43:54 addons-209049 crio[898]: time="2025-11-15T09:43:54.315828581Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 15 09:43:58 addons-209049 crio[898]: time="2025-11-15T09:43:58.67229072Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=da714fda-e6b0-485a-86a9-2346e9f4af05 name=/runtime.v1.ImageService/PullImage
	Nov 15 09:43:58 addons-209049 crio[898]: time="2025-11-15T09:43:58.672993402Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c277ad48-d910-4b59-b543-265ee66fc054 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:43:58 addons-209049 crio[898]: time="2025-11-15T09:43:58.674485845Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f8e9e329-85de-4dd9-a842-1fe37cea7601 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:43:58 addons-209049 crio[898]: time="2025-11-15T09:43:58.678203976Z" level=info msg="Creating container: default/busybox/busybox" id=6470627c-916d-4ccb-a34c-a73de86450bc name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 09:43:58 addons-209049 crio[898]: time="2025-11-15T09:43:58.678352253Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:43:58 addons-209049 crio[898]: time="2025-11-15T09:43:58.683692586Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:43:58 addons-209049 crio[898]: time="2025-11-15T09:43:58.684311365Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:43:58 addons-209049 crio[898]: time="2025-11-15T09:43:58.70263483Z" level=info msg="Created container ff67f0e87fc96230401489f9a89f3f5146c7a5459814923211d54c5c6f87b3e1: default/busybox/busybox" id=6470627c-916d-4ccb-a34c-a73de86450bc name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 09:43:58 addons-209049 crio[898]: time="2025-11-15T09:43:58.703266471Z" level=info msg="Starting container: ff67f0e87fc96230401489f9a89f3f5146c7a5459814923211d54c5c6f87b3e1" id=85ba6ebf-062f-4941-b7cc-7bdfdd71df47 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 09:43:58 addons-209049 crio[898]: time="2025-11-15T09:43:58.705070878Z" level=info msg="Started container" PID=6522 containerID=ff67f0e87fc96230401489f9a89f3f5146c7a5459814923211d54c5c6f87b3e1 description=default/busybox/busybox id=85ba6ebf-062f-4941-b7cc-7bdfdd71df47 name=/runtime.v1.RuntimeService/StartContainer sandboxID=78000eed370273ebbb25e18705d33434cd226df49c1a2796f3c8a37248caa533
	Nov 15 09:44:07 addons-209049 crio[898]: time="2025-11-15T09:44:07.16056257Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-create-pvc-8bd24f7f-8ecb-40a6-a860-d94dacd37833/POD" id=0793c45f-d15f-4663-8d4b-cee9b237f464 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 09:44:07 addons-209049 crio[898]: time="2025-11-15T09:44:07.160685449Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:44:07 addons-209049 crio[898]: time="2025-11-15T09:44:07.167121811Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-8bd24f7f-8ecb-40a6-a860-d94dacd37833 Namespace:local-path-storage ID:1dc104aa61d84a6840601cef1a8dfd3286c5bc0e625055309d80b25ae97ac8de UID:c660b281-cc4d-4a64-8e9e-f033b9a60fe5 NetNS:/var/run/netns/a7a3db42-211a-450e-b91e-35121baf051c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004cc690}] Aliases:map[]}"
	Nov 15 09:44:07 addons-209049 crio[898]: time="2025-11-15T09:44:07.167156254Z" level=info msg="Adding pod local-path-storage_helper-pod-create-pvc-8bd24f7f-8ecb-40a6-a860-d94dacd37833 to CNI network \"kindnet\" (type=ptp)"
	Nov 15 09:44:07 addons-209049 crio[898]: time="2025-11-15T09:44:07.177488294Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-8bd24f7f-8ecb-40a6-a860-d94dacd37833 Namespace:local-path-storage ID:1dc104aa61d84a6840601cef1a8dfd3286c5bc0e625055309d80b25ae97ac8de UID:c660b281-cc4d-4a64-8e9e-f033b9a60fe5 NetNS:/var/run/netns/a7a3db42-211a-450e-b91e-35121baf051c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004cc690}] Aliases:map[]}"
	Nov 15 09:44:07 addons-209049 crio[898]: time="2025-11-15T09:44:07.177614775Z" level=info msg="Checking pod local-path-storage_helper-pod-create-pvc-8bd24f7f-8ecb-40a6-a860-d94dacd37833 for CNI network kindnet (type=ptp)"
	Nov 15 09:44:07 addons-209049 crio[898]: time="2025-11-15T09:44:07.180135799Z" level=info msg="Ran pod sandbox 1dc104aa61d84a6840601cef1a8dfd3286c5bc0e625055309d80b25ae97ac8de with infra container: local-path-storage/helper-pod-create-pvc-8bd24f7f-8ecb-40a6-a860-d94dacd37833/POD" id=0793c45f-d15f-4663-8d4b-cee9b237f464 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 09:44:07 addons-209049 crio[898]: time="2025-11-15T09:44:07.18139468Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=67f5b9db-51bf-4503-b464-413ece5c4be6 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:44:07 addons-209049 crio[898]: time="2025-11-15T09:44:07.181613769Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=67f5b9db-51bf-4503-b464-413ece5c4be6 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:44:07 addons-209049 crio[898]: time="2025-11-15T09:44:07.181672122Z" level=info msg="Neither image nor artfiact docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 found" id=67f5b9db-51bf-4503-b464-413ece5c4be6 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:44:07 addons-209049 crio[898]: time="2025-11-15T09:44:07.182262173Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=ef86d5cb-26a5-43c8-b192-a71795cf0a79 name=/runtime.v1.ImageService/PullImage
	Nov 15 09:44:07 addons-209049 crio[898]: time="2025-11-15T09:44:07.185854589Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	ff67f0e87fc96       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          9 seconds ago        Running             busybox                                  0                   78000eed37027       busybox                                    default
	3caa65a862513       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          17 seconds ago       Running             csi-snapshotter                          0                   a6780b2ebdf30       csi-hostpathplugin-n2grt                   kube-system
	b0ff95e639d4d       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          18 seconds ago       Running             csi-provisioner                          0                   a6780b2ebdf30       csi-hostpathplugin-n2grt                   kube-system
	c9bdb51e12a14       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            19 seconds ago       Running             liveness-probe                           0                   a6780b2ebdf30       csi-hostpathplugin-n2grt                   kube-system
	e7dcd097399e8       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           20 seconds ago       Running             hostpath                                 0                   a6780b2ebdf30       csi-hostpathplugin-n2grt                   kube-system
	acf4ca35c9f55       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                21 seconds ago       Running             node-driver-registrar                    0                   a6780b2ebdf30       csi-hostpathplugin-n2grt                   kube-system
	57de45fd8698b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 22 seconds ago       Running             gcp-auth                                 0                   7c64127183e0a       gcp-auth-78565c9fb4-jr55m                  gcp-auth
	e64bd6dad227f       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             28 seconds ago       Running             controller                               0                   573aefb125022       ingress-nginx-controller-6c8bf45fb-j4f8b   ingress-nginx
	612149871aec4       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            35 seconds ago       Running             gadget                                   0                   f183d6a201637       gadget-cbnnb                               gadget
	171c9fa6da7f8       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              40 seconds ago       Running             registry-proxy                           0                   fad0e1b51b35b       registry-proxy-xzbqg                       kube-system
	ff456e58c6b53       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   46 seconds ago       Running             csi-external-health-monitor-controller   0                   a6780b2ebdf30       csi-hostpathplugin-n2grt                   kube-system
	ee4946da5ae0d       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     48 seconds ago       Running             nvidia-device-plugin-ctr                 0                   f974b5d6bdb65       nvidia-device-plugin-daemonset-qtrg4       kube-system
	103636dd26caf       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      53 seconds ago       Running             volume-snapshot-controller               0                   a3d7d91646be5       snapshot-controller-7d9fbc56b8-blfn9       kube-system
	bc5a7bfdb2232       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              54 seconds ago       Running             yakd                                     0                   d779fbd48f554       yakd-dashboard-5ff678cb9-5kfrb             yakd-dashboard
	49680c8d74f4f       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     59 seconds ago       Running             amd-gpu-device-plugin                    0                   0bd21297d22f1       amd-gpu-device-plugin-zxglt                kube-system
	b667435537d1e       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             About a minute ago   Running             csi-attacher                             0                   a58d0e96ae759       csi-hostpath-attacher-0                    kube-system
	0eebfaee45b60       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   693d7abd0a21d       snapshot-controller-7d9fbc56b8-mqtdg       kube-system
	2e0b28ec2dfa3       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              About a minute ago   Running             csi-resizer                              0                   cd70846b3479b       csi-hostpath-resizer-0                     kube-system
	ee41b9a2e021c       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             About a minute ago   Exited              patch                                    1                   ca5b8a192d36e       ingress-nginx-admission-patch-d5h7k        ingress-nginx
	25f3037224afb       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   About a minute ago   Exited              create                                   0                   7928bb55702f4       ingress-nginx-admission-create-fxrnb       ingress-nginx
	5c40e88663ddd       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             About a minute ago   Running             local-path-provisioner                   0                   0c519e741bb0d       local-path-provisioner-648f6765c9-6trtr    local-path-storage
	4c734c004dda0       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           About a minute ago   Running             registry                                 0                   ae6edd7f24add       registry-6b586f9694-fwbg5                  kube-system
	db6577073b2a6       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        About a minute ago   Running             metrics-server                           0                   4e3144467a734       metrics-server-85b7d694d7-sgjrz            kube-system
	da7a6d3454ebc       gcr.io/cloud-spanner-emulator/emulator@sha256:7360f5c5ff4b89d75592d9585fc2d59d207b08ccf262a84edfe79ee0613a7099                               About a minute ago   Running             cloud-spanner-emulator                   0                   a1765058a99f4       cloud-spanner-emulator-6f9fcf858b-7z68m    default
	ed655dc7f306b       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               About a minute ago   Running             minikube-ingress-dns                     0                   a36ea2077b2f5       kube-ingress-dns-minikube                  kube-system
	abed161df6f25       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   0e6f20b4c3ced       coredns-66bc5c9577-4xn7s                   kube-system
	1bdef9117bea1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   6d65d688f91c8       storage-provisioner                        kube-system
	c47933319f25e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             2 minutes ago        Running             kindnet-cni                              0                   69744a9050166       kindnet-p4lm7                              kube-system
	bf4bdcddcb90f       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             2 minutes ago        Running             kube-proxy                               0                   547bdcb69a56e       kube-proxy-vkr7k                           kube-system
	2d0c6bfa456fd       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             2 minutes ago        Running             kube-controller-manager                  0                   211afdbd9e242       kube-controller-manager-addons-209049      kube-system
	fc273347a0fa0       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             2 minutes ago        Running             kube-scheduler                           0                   22b016d5b1cb5       kube-scheduler-addons-209049               kube-system
	8f354571302cf       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             2 minutes ago        Running             kube-apiserver                           0                   9b97753985d92       kube-apiserver-addons-209049               kube-system
	1f26d41b1ae72       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             2 minutes ago        Running             etcd                                     0                   eaeefa8b1d51c       etcd-addons-209049                         kube-system
	
	
	==> coredns [abed161df6f25739be955e904c69c8cc237aee9607441efb5b85c298a8066bc3] <==
	[INFO] 10.244.0.18:35574 - 26303 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.002768016s
	[INFO] 10.244.0.18:42590 - 35639 "AAAA IN registry.kube-system.svc.cluster.local.southamerica-west1-a.c.k8s-minikube.internal. udp 101 false 512" NXDOMAIN qr,aa,rd,ra 218 0.000086913s
	[INFO] 10.244.0.18:42590 - 35342 "A IN registry.kube-system.svc.cluster.local.southamerica-west1-a.c.k8s-minikube.internal. udp 101 false 512" NXDOMAIN qr,aa,rd,ra 218 0.000123854s
	[INFO] 10.244.0.18:49257 - 63577 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000079512s
	[INFO] 10.244.0.18:49257 - 63307 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000095098s
	[INFO] 10.244.0.18:35566 - 43984 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000069912s
	[INFO] 10.244.0.18:35566 - 43520 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000112604s
	[INFO] 10.244.0.18:51832 - 31045 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000100733s
	[INFO] 10.244.0.18:51832 - 30895 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000127756s
	[INFO] 10.244.0.22:60455 - 40754 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000269832s
	[INFO] 10.244.0.22:40606 - 30297 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000331466s
	[INFO] 10.244.0.22:53545 - 64332 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000138521s
	[INFO] 10.244.0.22:60431 - 20824 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00018559s
	[INFO] 10.244.0.22:54538 - 24553 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000124346s
	[INFO] 10.244.0.22:59868 - 53418 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000158503s
	[INFO] 10.244.0.22:54353 - 45835 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.014406793s
	[INFO] 10.244.0.22:54398 - 48441 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.014549874s
	[INFO] 10.244.0.22:43769 - 19536 "A IN storage.googleapis.com.southamerica-west1-a.c.k8s-minikube.internal. udp 96 false 1232" NXDOMAIN qr,rd,ra 202 0.006176872s
	[INFO] 10.244.0.22:41370 - 44962 "AAAA IN storage.googleapis.com.southamerica-west1-a.c.k8s-minikube.internal. udp 96 false 1232" NXDOMAIN qr,rd,ra 202 0.006450045s
	[INFO] 10.244.0.22:57119 - 57641 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005058705s
	[INFO] 10.244.0.22:38996 - 60399 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005404149s
	[INFO] 10.244.0.22:34542 - 58722 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004315687s
	[INFO] 10.244.0.22:55544 - 49007 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005427596s
	[INFO] 10.244.0.22:34734 - 43458 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00093051s
	[INFO] 10.244.0.22:36932 - 20069 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 192 0.001197468s
	
	
	==> describe nodes <==
	Name:               addons-209049
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-209049
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=addons-209049
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T09_41_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-209049
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-209049"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 09:41:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-209049
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 09:44:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 09:43:51 +0000   Sat, 15 Nov 2025 09:41:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 09:43:51 +0000   Sat, 15 Nov 2025 09:41:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 09:43:51 +0000   Sat, 15 Nov 2025 09:41:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 09:43:51 +0000   Sat, 15 Nov 2025 09:42:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-209049
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                1008282d-3b27-4e3f-97ca-d7ea63ae3248
	  Boot ID:                    6a33f504-3a8c-4d49-8fc5-301cf5275efc
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  default                     cloud-spanner-emulator-6f9fcf858b-7z68m                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  gadget                      gadget-cbnnb                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  gcp-auth                    gcp-auth-78565c9fb4-jr55m                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-j4f8b                      100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         2m8s
	  kube-system                 amd-gpu-device-plugin-zxglt                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-4xn7s                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m13s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 csi-hostpathplugin-n2grt                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 etcd-addons-209049                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m20s
	  kube-system                 kindnet-p4lm7                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m14s
	  kube-system                 kube-apiserver-addons-209049                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-controller-manager-addons-209049                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-proxy-vkr7k                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-scheduler-addons-209049                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 metrics-server-85b7d694d7-sgjrz                               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         2m9s
	  kube-system                 nvidia-device-plugin-daemonset-qtrg4                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 registry-6b586f9694-fwbg5                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 registry-creds-764b6fb674-d6rh5                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 registry-proxy-xzbqg                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 snapshot-controller-7d9fbc56b8-blfn9                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 snapshot-controller-7d9fbc56b8-mqtdg                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  local-path-storage          helper-pod-create-pvc-8bd24f7f-8ecb-40a6-a860-d94dacd37833    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  local-path-storage          local-path-provisioner-648f6765c9-6trtr                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-5kfrb                                0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     2m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 2m12s  kube-proxy       
	  Normal   Starting                 2m20s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m20s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m20s  kubelet          Node addons-209049 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m20s  kubelet          Node addons-209049 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m20s  kubelet          Node addons-209049 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m14s  node-controller  Node addons-209049 event: Registered Node addons-209049 in Controller
	  Normal   NodeReady                92s    kubelet          Node addons-209049 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov15 08:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001689] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.389824] i8042: Warning: Keylock active
	[  +0.010803] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.483517] block sda: the capability attribute has been deprecated.
	[  +0.080513] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023932] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.604079] kauditd_printk_skb: 47 callbacks suppressed
	[Nov15 09:41] kmem.limit_in_bytes is deprecated and will be removed. Writing any value to this file has no effect. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [1f26d41b1ae72e1dc34f54376e004475d2c4c414c8b70169e5095a83b0a7212d] <==
	{"level":"warn","ts":"2025-11-15T09:41:45.593118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.598914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.605940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.618081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.624073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.630360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.636846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.643035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.678665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.685926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.692061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.701326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.709065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.714890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.720577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.732021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.779036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.785103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:41:45.824904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:02.181001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:02.187459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:24.019030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:24.025323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:24.099418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:24.105507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58872","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [57de45fd8698b773c19fc5a8e0495dd8b9e1a7e9a44b7071058464556ee4af16] <==
	2025/11/15 09:43:45 GCP Auth Webhook started!
	2025/11/15 09:43:53 Ready to marshal response ...
	2025/11/15 09:43:53 Ready to write response ...
	2025/11/15 09:43:53 Ready to marshal response ...
	2025/11/15 09:43:53 Ready to write response ...
	2025/11/15 09:43:54 Ready to marshal response ...
	2025/11/15 09:43:54 Ready to write response ...
	2025/11/15 09:44:06 Ready to marshal response ...
	2025/11/15 09:44:06 Ready to write response ...
	2025/11/15 09:44:06 Ready to marshal response ...
	2025/11/15 09:44:06 Ready to write response ...
	
	
	==> kernel <==
	 09:44:08 up  1:26,  0 user,  load average: 0.45, 1.54, 1.92
	Linux addons-209049 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c47933319f25ea439a895d8a214194c80c6c0e5be436f0cda95e520fb61fcc42] <==
	E1115 09:42:25.799183       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 09:42:25.801568       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1115 09:42:27.198282       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 09:42:27.198309       1 metrics.go:72] Registering metrics
	I1115 09:42:27.198357       1 controller.go:711] "Syncing nftables rules"
	I1115 09:42:35.791237       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:42:35.791301       1 main.go:301] handling current node
	I1115 09:42:45.787443       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:42:45.787529       1 main.go:301] handling current node
	I1115 09:42:55.790050       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:42:55.790091       1 main.go:301] handling current node
	I1115 09:43:05.787991       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:43:05.788047       1 main.go:301] handling current node
	I1115 09:43:15.787652       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:43:15.787696       1 main.go:301] handling current node
	I1115 09:43:25.787548       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:43:25.787581       1 main.go:301] handling current node
	I1115 09:43:35.787803       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:43:35.787928       1 main.go:301] handling current node
	I1115 09:43:45.788330       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:43:45.788366       1 main.go:301] handling current node
	I1115 09:43:55.787287       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:43:55.787325       1 main.go:301] handling current node
	I1115 09:44:05.789467       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:44:05.789511       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8f354571302cfa24ed2fcb9dc1a59908900201f2ee8a9f0985e0cd698988ceff] <==
	W1115 09:42:24.025304       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1115 09:42:24.099344       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1115 09:42:24.105453       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1115 09:42:36.302400       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.172.4:443: connect: connection refused
	E1115 09:42:36.303101       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.172.4:443: connect: connection refused" logger="UnhandledError"
	W1115 09:42:36.302628       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.172.4:443: connect: connection refused
	E1115 09:42:36.303674       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.172.4:443: connect: connection refused" logger="UnhandledError"
	W1115 09:42:36.321427       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.172.4:443: connect: connection refused
	E1115 09:42:36.321459       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.172.4:443: connect: connection refused" logger="UnhandledError"
	W1115 09:42:36.378083       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.172.4:443: connect: connection refused
	E1115 09:42:36.378128       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.172.4:443: connect: connection refused" logger="UnhandledError"
	W1115 09:42:58.159273       1 handler_proxy.go:99] no RequestInfo found in the context
	E1115 09:42:58.159317       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.21.27:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.21.27:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.21.27:443: connect: connection refused" logger="UnhandledError"
	E1115 09:42:58.159367       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1115 09:42:58.159655       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.21.27:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.21.27:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.21.27:443: connect: connection refused" logger="UnhandledError"
	E1115 09:42:58.165626       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.21.27:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.21.27:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.21.27:443: connect: connection refused" logger="UnhandledError"
	E1115 09:42:58.187097       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.21.27:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.21.27:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.21.27:443: connect: connection refused" logger="UnhandledError"
	E1115 09:42:58.229013       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.21.27:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.21.27:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.21.27:443: connect: connection refused" logger="UnhandledError"
	E1115 09:42:58.310113       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.21.27:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.21.27:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.21.27:443: connect: connection refused" logger="UnhandledError"
	I1115 09:42:58.578506       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1115 09:44:06.175037       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47426: use of closed network connection
	E1115 09:44:06.330804       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47456: use of closed network connection
	
	
	==> kube-controller-manager [2d0c6bfa456fdc1fae3dbb4e08aebd592ec5b3e4b99f82367927552be2506b4b] <==
	I1115 09:41:54.053221       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1115 09:41:54.054244       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 09:41:54.054250       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 09:41:54.054303       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 09:41:54.054315       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1115 09:41:54.054338       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 09:41:54.054506       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 09:41:54.056803       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 09:41:54.056874       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1115 09:41:54.057803       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 09:41:54.057937       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1115 09:41:54.059798       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 09:41:54.075157       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 09:41:54.082765       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1115 09:41:59.896363       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1115 09:42:24.013369       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1115 09:42:24.013551       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1115 09:42:24.013616       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1115 09:42:24.090012       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1115 09:42:24.093590       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1115 09:42:24.114727       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 09:42:24.194507       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 09:42:39.079403       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1115 09:42:54.119826       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1115 09:42:54.203113       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [bf4bdcddcb90f809aee08bb750d5e3282b1083556ad258032140c881f019a634] <==
	I1115 09:41:55.312438       1 server_linux.go:53] "Using iptables proxy"
	I1115 09:41:55.393906       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 09:41:55.496194       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 09:41:55.497244       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1115 09:41:55.497352       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 09:41:55.879978       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 09:41:55.880057       1 server_linux.go:132] "Using iptables Proxier"
	I1115 09:41:55.979305       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 09:41:55.985531       1 server.go:527] "Version info" version="v1.34.1"
	I1115 09:41:55.986131       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:41:55.994443       1 config.go:200] "Starting service config controller"
	I1115 09:41:55.994464       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 09:41:55.994489       1 config.go:106] "Starting endpoint slice config controller"
	I1115 09:41:55.994495       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 09:41:55.994508       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 09:41:55.994513       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 09:41:55.995440       1 config.go:309] "Starting node config controller"
	I1115 09:41:55.995450       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 09:41:55.995458       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 09:41:56.094998       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 09:41:56.095051       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 09:41:56.095084       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [fc273347a0fa0ac4a76f1735e0dc29501058daa141499c42e0703edf64d3cc86] <==
	E1115 09:41:46.694922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 09:41:46.694982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 09:41:46.695580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 09:41:46.695750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 09:41:46.696458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 09:41:46.696589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 09:41:46.696645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 09:41:46.696758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 09:41:46.697241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 09:41:46.697311       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 09:41:46.697381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 09:41:46.697397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 09:41:46.697424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 09:41:46.697475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 09:41:46.697553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 09:41:46.697596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 09:41:46.697600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 09:41:46.697578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 09:41:47.642001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 09:41:47.664106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 09:41:47.675973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 09:41:47.699698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 09:41:47.719741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 09:41:47.756848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1115 09:41:48.092551       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 09:43:28 addons-209049 kubelet[1421]: I1115 09:43:28.424590    1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-xzbqg" podStartSLOduration=1.965999133 podStartE2EDuration="52.424572994s" podCreationTimestamp="2025-11-15 09:42:36 +0000 UTC" firstStartedPulling="2025-11-15 09:42:37.404935981 +0000 UTC m=+48.767792157" lastFinishedPulling="2025-11-15 09:43:27.863509853 +0000 UTC m=+99.226366018" observedRunningTime="2025-11-15 09:43:28.423609549 +0000 UTC m=+99.786465731" watchObservedRunningTime="2025-11-15 09:43:28.424572994 +0000 UTC m=+99.787429176"
	Nov 15 09:43:29 addons-209049 kubelet[1421]: I1115 09:43:29.418082    1421 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-xzbqg" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 09:43:33 addons-209049 kubelet[1421]: I1115 09:43:33.587819    1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-cbnnb" podStartSLOduration=66.16516569 podStartE2EDuration="1m33.587782156s" podCreationTimestamp="2025-11-15 09:42:00 +0000 UTC" firstStartedPulling="2025-11-15 09:43:05.152415868 +0000 UTC m=+76.515272031" lastFinishedPulling="2025-11-15 09:43:32.575032332 +0000 UTC m=+103.937888497" observedRunningTime="2025-11-15 09:43:33.587263508 +0000 UTC m=+104.950119691" watchObservedRunningTime="2025-11-15 09:43:33.587782156 +0000 UTC m=+104.950638378"
	Nov 15 09:43:39 addons-209049 kubelet[1421]: I1115 09:43:39.529909    1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-j4f8b" podStartSLOduration=69.167601348 podStartE2EDuration="1m39.529885211s" podCreationTimestamp="2025-11-15 09:42:00 +0000 UTC" firstStartedPulling="2025-11-15 09:43:08.789214513 +0000 UTC m=+80.152070688" lastFinishedPulling="2025-11-15 09:43:39.151498388 +0000 UTC m=+110.514354551" observedRunningTime="2025-11-15 09:43:39.528869891 +0000 UTC m=+110.891726072" watchObservedRunningTime="2025-11-15 09:43:39.529885211 +0000 UTC m=+110.892741398"
	Nov 15 09:43:40 addons-209049 kubelet[1421]: E1115 09:43:40.336861    1421 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 15 09:43:40 addons-209049 kubelet[1421]: E1115 09:43:40.336947    1421 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3dfc52f-3ad1-4877-8af4-d96c6b46d6a2-gcr-creds podName:e3dfc52f-3ad1-4877-8af4-d96c6b46d6a2 nodeName:}" failed. No retries permitted until 2025-11-15 09:44:44.336932429 +0000 UTC m=+175.699788590 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/e3dfc52f-3ad1-4877-8af4-d96c6b46d6a2-gcr-creds") pod "registry-creds-764b6fb674-d6rh5" (UID: "e3dfc52f-3ad1-4877-8af4-d96c6b46d6a2") : secret "registry-creds-gcr" not found
	Nov 15 09:43:40 addons-209049 kubelet[1421]: I1115 09:43:40.777945    1421 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70c8c95b-2054-4501-8721-e9e86024dc9e" path="/var/lib/kubelet/pods/70c8c95b-2054-4501-8721-e9e86024dc9e/volumes"
	Nov 15 09:43:40 addons-209049 kubelet[1421]: I1115 09:43:40.778513    1421 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f552999e-f6de-484d-8fd6-0f9c933dd7d5" path="/var/lib/kubelet/pods/f552999e-f6de-484d-8fd6-0f9c933dd7d5/volumes"
	Nov 15 09:43:48 addons-209049 kubelet[1421]: I1115 09:43:48.706631    1421 scope.go:117] "RemoveContainer" containerID="379536dc4ef9bb1b2ffdb3b9f1a1d8b83d4a2ccc687ca0e2e341d157e0811eb6"
	Nov 15 09:43:48 addons-209049 kubelet[1421]: I1115 09:43:48.716034    1421 scope.go:117] "RemoveContainer" containerID="0c78820ce89a4dba2b18326538b233844128c3d857fa2c9b4f40a2b79dc6d651"
	Nov 15 09:43:48 addons-209049 kubelet[1421]: E1115 09:43:48.818872    1421 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/fc344ac94547164484ea1ce1144804db0f075a395900c398ace4ca63a1b1bc27/diff" to get inode usage: stat /var/lib/containers/storage/overlay/fc344ac94547164484ea1ce1144804db0f075a395900c398ace4ca63a1b1bc27/diff: no such file or directory, extraDiskErr: <nil>
	Nov 15 09:43:48 addons-209049 kubelet[1421]: E1115 09:43:48.838275    1421 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/149989e1bf33948c1acd48cce9fb3021cd8b32a2f71c38bd402176c31aa83f06/diff" to get inode usage: stat /var/lib/containers/storage/overlay/149989e1bf33948c1acd48cce9fb3021cd8b32a2f71c38bd402176c31aa83f06/diff: no such file or directory, extraDiskErr: <nil>
	Nov 15 09:43:48 addons-209049 kubelet[1421]: I1115 09:43:48.986548    1421 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Nov 15 09:43:48 addons-209049 kubelet[1421]: I1115 09:43:48.986589    1421 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Nov 15 09:43:50 addons-209049 kubelet[1421]: I1115 09:43:50.587159    1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-jr55m" podStartSLOduration=70.293616481 podStartE2EDuration="1m46.587135977s" podCreationTimestamp="2025-11-15 09:42:04 +0000 UTC" firstStartedPulling="2025-11-15 09:43:08.812540966 +0000 UTC m=+80.175397130" lastFinishedPulling="2025-11-15 09:43:45.106060459 +0000 UTC m=+116.468916626" observedRunningTime="2025-11-15 09:43:45.599369318 +0000 UTC m=+116.962225490" watchObservedRunningTime="2025-11-15 09:43:50.587135977 +0000 UTC m=+121.949992160"
	Nov 15 09:43:51 addons-209049 kubelet[1421]: I1115 09:43:51.634041    1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-n2grt" podStartSLOduration=2.349787119 podStartE2EDuration="1m15.634017942s" podCreationTimestamp="2025-11-15 09:42:36 +0000 UTC" firstStartedPulling="2025-11-15 09:42:37.391817667 +0000 UTC m=+48.754673829" lastFinishedPulling="2025-11-15 09:43:50.676048481 +0000 UTC m=+122.038904652" observedRunningTime="2025-11-15 09:43:51.63258018 +0000 UTC m=+122.995436361" watchObservedRunningTime="2025-11-15 09:43:51.634017942 +0000 UTC m=+122.996874124"
	Nov 15 09:43:54 addons-209049 kubelet[1421]: I1115 09:43:54.099914    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dr8v\" (UniqueName: \"kubernetes.io/projected/4d4b92e5-3f08-48f7-845f-c61019032b56-kube-api-access-8dr8v\") pod \"busybox\" (UID: \"4d4b92e5-3f08-48f7-845f-c61019032b56\") " pod="default/busybox"
	Nov 15 09:43:54 addons-209049 kubelet[1421]: I1115 09:43:54.099995    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4d4b92e5-3f08-48f7-845f-c61019032b56-gcp-creds\") pod \"busybox\" (UID: \"4d4b92e5-3f08-48f7-845f-c61019032b56\") " pod="default/busybox"
	Nov 15 09:43:54 addons-209049 kubelet[1421]: W1115 09:43:54.311534    1421 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/95837a795344356a1a114ddf87e92b007ae01ddbe0deac5a406c954fdd9cec8c/crio-78000eed370273ebbb25e18705d33434cd226df49c1a2796f3c8a37248caa533 WatchSource:0}: Error finding container 78000eed370273ebbb25e18705d33434cd226df49c1a2796f3c8a37248caa533: Status 404 returned error can't find the container with id 78000eed370273ebbb25e18705d33434cd226df49c1a2796f3c8a37248caa533
	Nov 15 09:43:59 addons-209049 kubelet[1421]: I1115 09:43:59.660628    1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.300820535 podStartE2EDuration="6.660607638s" podCreationTimestamp="2025-11-15 09:43:53 +0000 UTC" firstStartedPulling="2025-11-15 09:43:54.314083184 +0000 UTC m=+125.676939345" lastFinishedPulling="2025-11-15 09:43:58.673870269 +0000 UTC m=+130.036726448" observedRunningTime="2025-11-15 09:43:59.659608029 +0000 UTC m=+131.022464210" watchObservedRunningTime="2025-11-15 09:43:59.660607638 +0000 UTC m=+131.023463820"
	Nov 15 09:44:07 addons-209049 kubelet[1421]: I1115 09:44:07.001389    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/c660b281-cc4d-4a64-8e9e-f033b9a60fe5-script\") pod \"helper-pod-create-pvc-8bd24f7f-8ecb-40a6-a860-d94dacd37833\" (UID: \"c660b281-cc4d-4a64-8e9e-f033b9a60fe5\") " pod="local-path-storage/helper-pod-create-pvc-8bd24f7f-8ecb-40a6-a860-d94dacd37833"
	Nov 15 09:44:07 addons-209049 kubelet[1421]: I1115 09:44:07.001446    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/c660b281-cc4d-4a64-8e9e-f033b9a60fe5-data\") pod \"helper-pod-create-pvc-8bd24f7f-8ecb-40a6-a860-d94dacd37833\" (UID: \"c660b281-cc4d-4a64-8e9e-f033b9a60fe5\") " pod="local-path-storage/helper-pod-create-pvc-8bd24f7f-8ecb-40a6-a860-d94dacd37833"
	Nov 15 09:44:07 addons-209049 kubelet[1421]: I1115 09:44:07.001577    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdvsv\" (UniqueName: \"kubernetes.io/projected/c660b281-cc4d-4a64-8e9e-f033b9a60fe5-kube-api-access-cdvsv\") pod \"helper-pod-create-pvc-8bd24f7f-8ecb-40a6-a860-d94dacd37833\" (UID: \"c660b281-cc4d-4a64-8e9e-f033b9a60fe5\") " pod="local-path-storage/helper-pod-create-pvc-8bd24f7f-8ecb-40a6-a860-d94dacd37833"
	Nov 15 09:44:07 addons-209049 kubelet[1421]: I1115 09:44:07.001614    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c660b281-cc4d-4a64-8e9e-f033b9a60fe5-gcp-creds\") pod \"helper-pod-create-pvc-8bd24f7f-8ecb-40a6-a860-d94dacd37833\" (UID: \"c660b281-cc4d-4a64-8e9e-f033b9a60fe5\") " pod="local-path-storage/helper-pod-create-pvc-8bd24f7f-8ecb-40a6-a860-d94dacd37833"
	Nov 15 09:44:07 addons-209049 kubelet[1421]: W1115 09:44:07.179547    1421 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/95837a795344356a1a114ddf87e92b007ae01ddbe0deac5a406c954fdd9cec8c/crio-1dc104aa61d84a6840601cef1a8dfd3286c5bc0e625055309d80b25ae97ac8de WatchSource:0}: Error finding container 1dc104aa61d84a6840601cef1a8dfd3286c5bc0e625055309d80b25ae97ac8de: Status 404 returned error can't find the container with id 1dc104aa61d84a6840601cef1a8dfd3286c5bc0e625055309d80b25ae97ac8de
	
	
	==> storage-provisioner [1bdef9117bea1f9beb91b7cd74c36b09d5d21005cb342d6d96771811af8c1a65] <==
	W1115 09:43:43.707145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:45.776494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:45.782350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:47.786318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:47.792218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:49.795375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:49.799456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:51.802782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:51.806663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:53.809822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:53.814529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:55.817492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:55.821288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:57.825025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:57.831048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:59.833890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:59.838095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:44:01.841208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:44:01.847276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:44:03.850385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:44:03.855396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:44:05.859152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:44:05.877000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:44:07.881031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:44:07.885590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
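Note on the storage-provisioner warnings in the dump above: they are advisory only. The provisioner is still reading v1 Endpoints (the warnings recur at its normal poll interval), which Kubernetes deprecates in favour of discovery.k8s.io/v1 EndpointSlice. A hedged way to inspect the replacement resource on this cluster, assuming the addons-209049 context is still reachable; this is illustrative and not a command the test suite runs:

	kubectl --context addons-209049 get endpointslices.discovery.k8s.io -A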
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-209049 -n addons-209049
helpers_test.go:269: (dbg) Run:  kubectl --context addons-209049 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: test-local-path ingress-nginx-admission-create-fxrnb ingress-nginx-admission-patch-d5h7k registry-creds-764b6fb674-d6rh5 helper-pod-create-pvc-8bd24f7f-8ecb-40a6-a860-d94dacd37833
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-209049 describe pod test-local-path ingress-nginx-admission-create-fxrnb ingress-nginx-admission-patch-d5h7k registry-creds-764b6fb674-d6rh5 helper-pod-create-pvc-8bd24f7f-8ecb-40a6-a860-d94dacd37833
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-209049 describe pod test-local-path ingress-nginx-admission-create-fxrnb ingress-nginx-admission-patch-d5h7k registry-creds-764b6fb674-d6rh5 helper-pod-create-pvc-8bd24f7f-8ecb-40a6-a860-d94dacd37833: exit status 1 (79.28311ms)

                                                
                                                
-- stdout --
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cbgcv (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-cbgcv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-fxrnb" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-d5h7k" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-d6rh5" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-8bd24f7f-8ecb-40a6-a860-d94dacd37833" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-209049 describe pod test-local-path ingress-nginx-admission-create-fxrnb ingress-nginx-admission-patch-d5h7k registry-creds-764b6fb674-d6rh5 helper-pod-create-pvc-8bd24f7f-8ecb-40a6-a860-d94dacd37833: exit status 1
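The exit status 1 from the describe call above is expected here: kubectl describe exits non-zero when any of the named pods cannot be found, even though it still prints the full description of the one pod (test-local-path) that does exist. A hedged check that isolates the surviving pod, for illustration only and not part of the test flow:

	kubectl --context addons-209049 describe pod test-local-path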
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-209049 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-209049 addons disable headlamp --alsologtostderr -v=1: exit status 11 (252.90192ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:44:08.998313   69939 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:44:08.998638   69939 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:44:08.998651   69939 out.go:374] Setting ErrFile to fd 2...
	I1115 09:44:08.998658   69939 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:44:08.998933   69939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 09:44:08.999259   69939 mustload.go:66] Loading cluster: addons-209049
	I1115 09:44:08.999679   69939 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:44:08.999700   69939 addons.go:607] checking whether the cluster is paused
	I1115 09:44:08.999842   69939 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:44:08.999862   69939 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:44:09.000403   69939 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:44:09.023818   69939 ssh_runner.go:195] Run: systemctl --version
	I1115 09:44:09.023880   69939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:44:09.042424   69939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:44:09.136465   69939 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:44:09.136572   69939 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:44:09.164783   69939 cri.go:89] found id: "3caa65a862513e243b22e76e1bbf885e3ac037a176ca99b12f91d698216975db"
	I1115 09:44:09.164809   69939 cri.go:89] found id: "b0ff95e639d4d5eb2c0d77d21ada78f8cd1fb53b0f074a034a43d45e26991503"
	I1115 09:44:09.164815   69939 cri.go:89] found id: "c9bdb51e12a14483aa19c676698377a0b60b77720f0628965eea3278bbb8e041"
	I1115 09:44:09.164820   69939 cri.go:89] found id: "e7dcd097399e865e5ae5f0e729bdc0f1048db8b421e3cb038204849e0f017ce4"
	I1115 09:44:09.164824   69939 cri.go:89] found id: "acf4ca35c9f55a5b7ca511482b0cfe1fd19ad4da286e419734932ebc92d08987"
	I1115 09:44:09.164829   69939 cri.go:89] found id: "171c9fa6da7f829033739f1a599a437a0f9fb0490f6e35cb28121675ec6610d5"
	I1115 09:44:09.164834   69939 cri.go:89] found id: "ff456e58c6b5371ae6d2128ce2bd4b5688b2bdd7c3b3eb2b96101bb1b58fe2c5"
	I1115 09:44:09.164837   69939 cri.go:89] found id: "ee4946da5ae0da768ba560dc1b29498a8426405bac06d4d18f0a838d40c80725"
	I1115 09:44:09.164840   69939 cri.go:89] found id: "103636dd26caf3b9549cd406346b20c65628300e437e7be4616d3f0077c744bd"
	I1115 09:44:09.164855   69939 cri.go:89] found id: "49680c8d74f4f54e56ef4ffd5f2ca74f9034bbb45596e528bea07a9727288e4c"
	I1115 09:44:09.164860   69939 cri.go:89] found id: "b667435537d1ec9aa535058d15e290f6af371012bddf3d30095187b5c578a6f7"
	I1115 09:44:09.164863   69939 cri.go:89] found id: "0eebfaee45b609055247f2529eb805a5c5b268a2be72bc3dc72864bae3e6b98f"
	I1115 09:44:09.164867   69939 cri.go:89] found id: "2e0b28ec2dfa3079760cebaa9bb76189b77530f1910306d850dbf67dea10ec99"
	I1115 09:44:09.164872   69939 cri.go:89] found id: "4c734c004dda05124ee316d7d4e813486d10a56625af5fde238c99fe3c7fcb59"
	I1115 09:44:09.164877   69939 cri.go:89] found id: "db6577073b2a6a2e7ebbec11d5b9ea8189c70d1adfad3656b9c47577a4cda077"
	I1115 09:44:09.164885   69939 cri.go:89] found id: "ed655dc7f306b7a351e2adf5b80bffbf7072a9bb72cdb5eac89f029f44b5af1e"
	I1115 09:44:09.164892   69939 cri.go:89] found id: "abed161df6f25739be955e904c69c8cc237aee9607441efb5b85c298a8066bc3"
	I1115 09:44:09.164898   69939 cri.go:89] found id: "1bdef9117bea1f9beb91b7cd74c36b09d5d21005cb342d6d96771811af8c1a65"
	I1115 09:44:09.164902   69939 cri.go:89] found id: "c47933319f25ea439a895d8a214194c80c6c0e5be436f0cda95e520fb61fcc42"
	I1115 09:44:09.164905   69939 cri.go:89] found id: "bf4bdcddcb90f809aee08bb750d5e3282b1083556ad258032140c881f019a634"
	I1115 09:44:09.164909   69939 cri.go:89] found id: "2d0c6bfa456fdc1fae3dbb4e08aebd592ec5b3e4b99f82367927552be2506b4b"
	I1115 09:44:09.164913   69939 cri.go:89] found id: "fc273347a0fa0ac4a76f1735e0dc29501058daa141499c42e0703edf64d3cc86"
	I1115 09:44:09.164917   69939 cri.go:89] found id: "8f354571302cfa24ed2fcb9dc1a59908900201f2ee8a9f0985e0cd698988ceff"
	I1115 09:44:09.164931   69939 cri.go:89] found id: "1f26d41b1ae72e1dc34f54376e004475d2c4c414c8b70169e5095a83b0a7212d"
	I1115 09:44:09.164939   69939 cri.go:89] found id: ""
	I1115 09:44:09.165002   69939 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:44:09.179174   69939 out.go:203] 
	W1115 09:44:09.180341   69939 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:44:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:44:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:44:09.180360   69939 out.go:285] * 
	* 
	W1115 09:44:09.184608   69939 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:44:09.186243   69939 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-209049 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.61s)
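All of the addons disable failures in this report share the pattern visible in the Headlamp stderr above: the paused-state check lists kube-system containers through crictl without error, then aborts with MK_ADDON_DISABLE_PAUSED when sudo runc list -f json fails because /run/runc does not exist on this crio node. A minimal manual reproduction sketch, assuming the addons-209049 profile from this run is still up and using only commands that appear elsewhere in this report:

	# Mirrors the container listing the disable path performs first (succeeds in these logs).
	out/minikube-linux-amd64 -p addons-209049 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# The step that fails with "open /run/runc: no such file or directory" in these logs.
	out/minikube-linux-amd64 -p addons-209049 ssh "sudo runc list -f json"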

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-7z68m" [2dc77ded-d9b8-443d-ad09-c06b1be01baa] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00283796s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-209049 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-209049 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (240.217285ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:44:14.252928   70215 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:44:14.253043   70215 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:44:14.253049   70215 out.go:374] Setting ErrFile to fd 2...
	I1115 09:44:14.253053   70215 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:44:14.253265   70215 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 09:44:14.253505   70215 mustload.go:66] Loading cluster: addons-209049
	I1115 09:44:14.253843   70215 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:44:14.253859   70215 addons.go:607] checking whether the cluster is paused
	I1115 09:44:14.253978   70215 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:44:14.253998   70215 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:44:14.254494   70215 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:44:14.272574   70215 ssh_runner.go:195] Run: systemctl --version
	I1115 09:44:14.272629   70215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:44:14.291007   70215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:44:14.383407   70215 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:44:14.383502   70215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:44:14.413636   70215 cri.go:89] found id: "3caa65a862513e243b22e76e1bbf885e3ac037a176ca99b12f91d698216975db"
	I1115 09:44:14.413660   70215 cri.go:89] found id: "b0ff95e639d4d5eb2c0d77d21ada78f8cd1fb53b0f074a034a43d45e26991503"
	I1115 09:44:14.413664   70215 cri.go:89] found id: "c9bdb51e12a14483aa19c676698377a0b60b77720f0628965eea3278bbb8e041"
	I1115 09:44:14.413667   70215 cri.go:89] found id: "e7dcd097399e865e5ae5f0e729bdc0f1048db8b421e3cb038204849e0f017ce4"
	I1115 09:44:14.413670   70215 cri.go:89] found id: "acf4ca35c9f55a5b7ca511482b0cfe1fd19ad4da286e419734932ebc92d08987"
	I1115 09:44:14.413673   70215 cri.go:89] found id: "171c9fa6da7f829033739f1a599a437a0f9fb0490f6e35cb28121675ec6610d5"
	I1115 09:44:14.413676   70215 cri.go:89] found id: "ff456e58c6b5371ae6d2128ce2bd4b5688b2bdd7c3b3eb2b96101bb1b58fe2c5"
	I1115 09:44:14.413678   70215 cri.go:89] found id: "ee4946da5ae0da768ba560dc1b29498a8426405bac06d4d18f0a838d40c80725"
	I1115 09:44:14.413681   70215 cri.go:89] found id: "103636dd26caf3b9549cd406346b20c65628300e437e7be4616d3f0077c744bd"
	I1115 09:44:14.413685   70215 cri.go:89] found id: "49680c8d74f4f54e56ef4ffd5f2ca74f9034bbb45596e528bea07a9727288e4c"
	I1115 09:44:14.413688   70215 cri.go:89] found id: "b667435537d1ec9aa535058d15e290f6af371012bddf3d30095187b5c578a6f7"
	I1115 09:44:14.413690   70215 cri.go:89] found id: "0eebfaee45b609055247f2529eb805a5c5b268a2be72bc3dc72864bae3e6b98f"
	I1115 09:44:14.413694   70215 cri.go:89] found id: "2e0b28ec2dfa3079760cebaa9bb76189b77530f1910306d850dbf67dea10ec99"
	I1115 09:44:14.413699   70215 cri.go:89] found id: "4c734c004dda05124ee316d7d4e813486d10a56625af5fde238c99fe3c7fcb59"
	I1115 09:44:14.413704   70215 cri.go:89] found id: "db6577073b2a6a2e7ebbec11d5b9ea8189c70d1adfad3656b9c47577a4cda077"
	I1115 09:44:14.413721   70215 cri.go:89] found id: "ed655dc7f306b7a351e2adf5b80bffbf7072a9bb72cdb5eac89f029f44b5af1e"
	I1115 09:44:14.413734   70215 cri.go:89] found id: "abed161df6f25739be955e904c69c8cc237aee9607441efb5b85c298a8066bc3"
	I1115 09:44:14.413742   70215 cri.go:89] found id: "1bdef9117bea1f9beb91b7cd74c36b09d5d21005cb342d6d96771811af8c1a65"
	I1115 09:44:14.413747   70215 cri.go:89] found id: "c47933319f25ea439a895d8a214194c80c6c0e5be436f0cda95e520fb61fcc42"
	I1115 09:44:14.413750   70215 cri.go:89] found id: "bf4bdcddcb90f809aee08bb750d5e3282b1083556ad258032140c881f019a634"
	I1115 09:44:14.413757   70215 cri.go:89] found id: "2d0c6bfa456fdc1fae3dbb4e08aebd592ec5b3e4b99f82367927552be2506b4b"
	I1115 09:44:14.413763   70215 cri.go:89] found id: "fc273347a0fa0ac4a76f1735e0dc29501058daa141499c42e0703edf64d3cc86"
	I1115 09:44:14.413766   70215 cri.go:89] found id: "8f354571302cfa24ed2fcb9dc1a59908900201f2ee8a9f0985e0cd698988ceff"
	I1115 09:44:14.413768   70215 cri.go:89] found id: "1f26d41b1ae72e1dc34f54376e004475d2c4c414c8b70169e5095a83b0a7212d"
	I1115 09:44:14.413771   70215 cri.go:89] found id: ""
	I1115 09:44:14.413816   70215 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:44:14.427927   70215 out.go:203] 
	W1115 09:44:14.429213   70215 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:44:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:44:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:44:14.429234   70215 out.go:285] * 
	* 
	W1115 09:44:14.433576   70215 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:44:14.434922   70215 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-209049 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.25s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (10.14s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-209049 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-209049 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-209049 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-209049 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-209049 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-209049 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-209049 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-209049 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [d896e1c2-1b58-43b3-b92f-2ed976af0390] Pending
helpers_test.go:352: "test-local-path" [d896e1c2-1b58-43b3-b92f-2ed976af0390] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [d896e1c2-1b58-43b3-b92f-2ed976af0390] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [d896e1c2-1b58-43b3-b92f-2ed976af0390] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003060852s
addons_test.go:967: (dbg) Run:  kubectl --context addons-209049 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-209049 ssh "cat /opt/local-path-provisioner/pvc-8bd24f7f-8ecb-40a6-a860-d94dacd37833_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-209049 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-209049 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-209049 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-209049 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (257.523188ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:44:16.521978   70460 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:44:16.522236   70460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:44:16.522246   70460 out.go:374] Setting ErrFile to fd 2...
	I1115 09:44:16.522249   70460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:44:16.522418   70460 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 09:44:16.522665   70460 mustload.go:66] Loading cluster: addons-209049
	I1115 09:44:16.523020   70460 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:44:16.523042   70460 addons.go:607] checking whether the cluster is paused
	I1115 09:44:16.523133   70460 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:44:16.523146   70460 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:44:16.523500   70460 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:44:16.541089   70460 ssh_runner.go:195] Run: systemctl --version
	I1115 09:44:16.541150   70460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:44:16.560145   70460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:44:16.654918   70460 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:44:16.655043   70460 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:44:16.687935   70460 cri.go:89] found id: "3caa65a862513e243b22e76e1bbf885e3ac037a176ca99b12f91d698216975db"
	I1115 09:44:16.687971   70460 cri.go:89] found id: "b0ff95e639d4d5eb2c0d77d21ada78f8cd1fb53b0f074a034a43d45e26991503"
	I1115 09:44:16.687978   70460 cri.go:89] found id: "c9bdb51e12a14483aa19c676698377a0b60b77720f0628965eea3278bbb8e041"
	I1115 09:44:16.687983   70460 cri.go:89] found id: "e7dcd097399e865e5ae5f0e729bdc0f1048db8b421e3cb038204849e0f017ce4"
	I1115 09:44:16.687988   70460 cri.go:89] found id: "acf4ca35c9f55a5b7ca511482b0cfe1fd19ad4da286e419734932ebc92d08987"
	I1115 09:44:16.687993   70460 cri.go:89] found id: "171c9fa6da7f829033739f1a599a437a0f9fb0490f6e35cb28121675ec6610d5"
	I1115 09:44:16.687998   70460 cri.go:89] found id: "ff456e58c6b5371ae6d2128ce2bd4b5688b2bdd7c3b3eb2b96101bb1b58fe2c5"
	I1115 09:44:16.688002   70460 cri.go:89] found id: "ee4946da5ae0da768ba560dc1b29498a8426405bac06d4d18f0a838d40c80725"
	I1115 09:44:16.688006   70460 cri.go:89] found id: "103636dd26caf3b9549cd406346b20c65628300e437e7be4616d3f0077c744bd"
	I1115 09:44:16.688014   70460 cri.go:89] found id: "49680c8d74f4f54e56ef4ffd5f2ca74f9034bbb45596e528bea07a9727288e4c"
	I1115 09:44:16.688017   70460 cri.go:89] found id: "b667435537d1ec9aa535058d15e290f6af371012bddf3d30095187b5c578a6f7"
	I1115 09:44:16.688021   70460 cri.go:89] found id: "0eebfaee45b609055247f2529eb805a5c5b268a2be72bc3dc72864bae3e6b98f"
	I1115 09:44:16.688030   70460 cri.go:89] found id: "2e0b28ec2dfa3079760cebaa9bb76189b77530f1910306d850dbf67dea10ec99"
	I1115 09:44:16.688035   70460 cri.go:89] found id: "4c734c004dda05124ee316d7d4e813486d10a56625af5fde238c99fe3c7fcb59"
	I1115 09:44:16.688043   70460 cri.go:89] found id: "db6577073b2a6a2e7ebbec11d5b9ea8189c70d1adfad3656b9c47577a4cda077"
	I1115 09:44:16.688055   70460 cri.go:89] found id: "ed655dc7f306b7a351e2adf5b80bffbf7072a9bb72cdb5eac89f029f44b5af1e"
	I1115 09:44:16.688063   70460 cri.go:89] found id: "abed161df6f25739be955e904c69c8cc237aee9607441efb5b85c298a8066bc3"
	I1115 09:44:16.688069   70460 cri.go:89] found id: "1bdef9117bea1f9beb91b7cd74c36b09d5d21005cb342d6d96771811af8c1a65"
	I1115 09:44:16.688073   70460 cri.go:89] found id: "c47933319f25ea439a895d8a214194c80c6c0e5be436f0cda95e520fb61fcc42"
	I1115 09:44:16.688078   70460 cri.go:89] found id: "bf4bdcddcb90f809aee08bb750d5e3282b1083556ad258032140c881f019a634"
	I1115 09:44:16.688082   70460 cri.go:89] found id: "2d0c6bfa456fdc1fae3dbb4e08aebd592ec5b3e4b99f82367927552be2506b4b"
	I1115 09:44:16.688086   70460 cri.go:89] found id: "fc273347a0fa0ac4a76f1735e0dc29501058daa141499c42e0703edf64d3cc86"
	I1115 09:44:16.688090   70460 cri.go:89] found id: "8f354571302cfa24ed2fcb9dc1a59908900201f2ee8a9f0985e0cd698988ceff"
	I1115 09:44:16.688094   70460 cri.go:89] found id: "1f26d41b1ae72e1dc34f54376e004475d2c4c414c8b70169e5095a83b0a7212d"
	I1115 09:44:16.688098   70460 cri.go:89] found id: ""
	I1115 09:44:16.688161   70460 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:44:16.705008   70460 out.go:203] 
	W1115 09:44:16.706328   70460 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:44:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:44:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:44:16.706403   70460 out.go:285] * 
	* 
	W1115 09:44:16.712843   70460 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:44:16.714590   70460 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-209049 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (10.14s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.56s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-qtrg4" [2ece1371-7279-4f25-ad2d-270518aadb18] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003636779s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-209049 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-209049 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (549.86726ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:44:11.641132   70078 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:44:11.641384   70078 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:44:11.641394   70078 out.go:374] Setting ErrFile to fd 2...
	I1115 09:44:11.641398   70078 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:44:11.641619   70078 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 09:44:11.641890   70078 mustload.go:66] Loading cluster: addons-209049
	I1115 09:44:11.642239   70078 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:44:11.642254   70078 addons.go:607] checking whether the cluster is paused
	I1115 09:44:11.642335   70078 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:44:11.642347   70078 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:44:11.642749   70078 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:44:11.661672   70078 ssh_runner.go:195] Run: systemctl --version
	I1115 09:44:11.661726   70078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:44:11.679920   70078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:44:11.773600   70078 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:44:11.773664   70078 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:44:11.805781   70078 cri.go:89] found id: "3caa65a862513e243b22e76e1bbf885e3ac037a176ca99b12f91d698216975db"
	I1115 09:44:11.805820   70078 cri.go:89] found id: "b0ff95e639d4d5eb2c0d77d21ada78f8cd1fb53b0f074a034a43d45e26991503"
	I1115 09:44:11.805824   70078 cri.go:89] found id: "c9bdb51e12a14483aa19c676698377a0b60b77720f0628965eea3278bbb8e041"
	I1115 09:44:11.805827   70078 cri.go:89] found id: "e7dcd097399e865e5ae5f0e729bdc0f1048db8b421e3cb038204849e0f017ce4"
	I1115 09:44:11.805829   70078 cri.go:89] found id: "acf4ca35c9f55a5b7ca511482b0cfe1fd19ad4da286e419734932ebc92d08987"
	I1115 09:44:11.805834   70078 cri.go:89] found id: "171c9fa6da7f829033739f1a599a437a0f9fb0490f6e35cb28121675ec6610d5"
	I1115 09:44:11.805836   70078 cri.go:89] found id: "ff456e58c6b5371ae6d2128ce2bd4b5688b2bdd7c3b3eb2b96101bb1b58fe2c5"
	I1115 09:44:11.805838   70078 cri.go:89] found id: "ee4946da5ae0da768ba560dc1b29498a8426405bac06d4d18f0a838d40c80725"
	I1115 09:44:11.805841   70078 cri.go:89] found id: "103636dd26caf3b9549cd406346b20c65628300e437e7be4616d3f0077c744bd"
	I1115 09:44:11.805850   70078 cri.go:89] found id: "49680c8d74f4f54e56ef4ffd5f2ca74f9034bbb45596e528bea07a9727288e4c"
	I1115 09:44:11.805853   70078 cri.go:89] found id: "b667435537d1ec9aa535058d15e290f6af371012bddf3d30095187b5c578a6f7"
	I1115 09:44:11.805855   70078 cri.go:89] found id: "0eebfaee45b609055247f2529eb805a5c5b268a2be72bc3dc72864bae3e6b98f"
	I1115 09:44:11.805858   70078 cri.go:89] found id: "2e0b28ec2dfa3079760cebaa9bb76189b77530f1910306d850dbf67dea10ec99"
	I1115 09:44:11.805860   70078 cri.go:89] found id: "4c734c004dda05124ee316d7d4e813486d10a56625af5fde238c99fe3c7fcb59"
	I1115 09:44:11.805862   70078 cri.go:89] found id: "db6577073b2a6a2e7ebbec11d5b9ea8189c70d1adfad3656b9c47577a4cda077"
	I1115 09:44:11.805869   70078 cri.go:89] found id: "ed655dc7f306b7a351e2adf5b80bffbf7072a9bb72cdb5eac89f029f44b5af1e"
	I1115 09:44:11.805875   70078 cri.go:89] found id: "abed161df6f25739be955e904c69c8cc237aee9607441efb5b85c298a8066bc3"
	I1115 09:44:11.805879   70078 cri.go:89] found id: "1bdef9117bea1f9beb91b7cd74c36b09d5d21005cb342d6d96771811af8c1a65"
	I1115 09:44:11.805882   70078 cri.go:89] found id: "c47933319f25ea439a895d8a214194c80c6c0e5be436f0cda95e520fb61fcc42"
	I1115 09:44:11.805884   70078 cri.go:89] found id: "bf4bdcddcb90f809aee08bb750d5e3282b1083556ad258032140c881f019a634"
	I1115 09:44:11.805887   70078 cri.go:89] found id: "2d0c6bfa456fdc1fae3dbb4e08aebd592ec5b3e4b99f82367927552be2506b4b"
	I1115 09:44:11.805889   70078 cri.go:89] found id: "fc273347a0fa0ac4a76f1735e0dc29501058daa141499c42e0703edf64d3cc86"
	I1115 09:44:11.805891   70078 cri.go:89] found id: "8f354571302cfa24ed2fcb9dc1a59908900201f2ee8a9f0985e0cd698988ceff"
	I1115 09:44:11.805894   70078 cri.go:89] found id: "1f26d41b1ae72e1dc34f54376e004475d2c4c414c8b70169e5095a83b0a7212d"
	I1115 09:44:11.805896   70078 cri.go:89] found id: ""
	I1115 09:44:11.805943   70078 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:44:11.906357   70078 out.go:203] 
	W1115 09:44:11.980096   70078 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:44:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:44:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:44:11.980132   70078 out.go:285] * 
	* 
	W1115 09:44:11.985492   70078 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:44:12.061392   70078 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-209049 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.56s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-5kfrb" [fc634565-5fff-491f-8555-7199795097b2] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003586029s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-209049 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-209049 addons disable yakd --alsologtostderr -v=1: exit status 11 (249.561242ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:44:19.502442   70716 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:44:19.502549   70716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:44:19.502557   70716 out.go:374] Setting ErrFile to fd 2...
	I1115 09:44:19.502560   70716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:44:19.502752   70716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 09:44:19.503018   70716 mustload.go:66] Loading cluster: addons-209049
	I1115 09:44:19.503363   70716 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:44:19.503380   70716 addons.go:607] checking whether the cluster is paused
	I1115 09:44:19.503464   70716 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:44:19.503476   70716 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:44:19.503849   70716 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:44:19.523528   70716 ssh_runner.go:195] Run: systemctl --version
	I1115 09:44:19.523586   70716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:44:19.543766   70716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:44:19.637920   70716 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:44:19.638020   70716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:44:19.666769   70716 cri.go:89] found id: "3caa65a862513e243b22e76e1bbf885e3ac037a176ca99b12f91d698216975db"
	I1115 09:44:19.666793   70716 cri.go:89] found id: "b0ff95e639d4d5eb2c0d77d21ada78f8cd1fb53b0f074a034a43d45e26991503"
	I1115 09:44:19.666800   70716 cri.go:89] found id: "c9bdb51e12a14483aa19c676698377a0b60b77720f0628965eea3278bbb8e041"
	I1115 09:44:19.666805   70716 cri.go:89] found id: "e7dcd097399e865e5ae5f0e729bdc0f1048db8b421e3cb038204849e0f017ce4"
	I1115 09:44:19.666809   70716 cri.go:89] found id: "acf4ca35c9f55a5b7ca511482b0cfe1fd19ad4da286e419734932ebc92d08987"
	I1115 09:44:19.666815   70716 cri.go:89] found id: "171c9fa6da7f829033739f1a599a437a0f9fb0490f6e35cb28121675ec6610d5"
	I1115 09:44:19.666820   70716 cri.go:89] found id: "ff456e58c6b5371ae6d2128ce2bd4b5688b2bdd7c3b3eb2b96101bb1b58fe2c5"
	I1115 09:44:19.666824   70716 cri.go:89] found id: "ee4946da5ae0da768ba560dc1b29498a8426405bac06d4d18f0a838d40c80725"
	I1115 09:44:19.666828   70716 cri.go:89] found id: "103636dd26caf3b9549cd406346b20c65628300e437e7be4616d3f0077c744bd"
	I1115 09:44:19.666836   70716 cri.go:89] found id: "49680c8d74f4f54e56ef4ffd5f2ca74f9034bbb45596e528bea07a9727288e4c"
	I1115 09:44:19.666840   70716 cri.go:89] found id: "b667435537d1ec9aa535058d15e290f6af371012bddf3d30095187b5c578a6f7"
	I1115 09:44:19.666844   70716 cri.go:89] found id: "0eebfaee45b609055247f2529eb805a5c5b268a2be72bc3dc72864bae3e6b98f"
	I1115 09:44:19.666848   70716 cri.go:89] found id: "2e0b28ec2dfa3079760cebaa9bb76189b77530f1910306d850dbf67dea10ec99"
	I1115 09:44:19.666851   70716 cri.go:89] found id: "4c734c004dda05124ee316d7d4e813486d10a56625af5fde238c99fe3c7fcb59"
	I1115 09:44:19.666854   70716 cri.go:89] found id: "db6577073b2a6a2e7ebbec11d5b9ea8189c70d1adfad3656b9c47577a4cda077"
	I1115 09:44:19.666874   70716 cri.go:89] found id: "ed655dc7f306b7a351e2adf5b80bffbf7072a9bb72cdb5eac89f029f44b5af1e"
	I1115 09:44:19.666886   70716 cri.go:89] found id: "abed161df6f25739be955e904c69c8cc237aee9607441efb5b85c298a8066bc3"
	I1115 09:44:19.666893   70716 cri.go:89] found id: "1bdef9117bea1f9beb91b7cd74c36b09d5d21005cb342d6d96771811af8c1a65"
	I1115 09:44:19.666897   70716 cri.go:89] found id: "c47933319f25ea439a895d8a214194c80c6c0e5be436f0cda95e520fb61fcc42"
	I1115 09:44:19.666901   70716 cri.go:89] found id: "bf4bdcddcb90f809aee08bb750d5e3282b1083556ad258032140c881f019a634"
	I1115 09:44:19.666909   70716 cri.go:89] found id: "2d0c6bfa456fdc1fae3dbb4e08aebd592ec5b3e4b99f82367927552be2506b4b"
	I1115 09:44:19.666913   70716 cri.go:89] found id: "fc273347a0fa0ac4a76f1735e0dc29501058daa141499c42e0703edf64d3cc86"
	I1115 09:44:19.666917   70716 cri.go:89] found id: "8f354571302cfa24ed2fcb9dc1a59908900201f2ee8a9f0985e0cd698988ceff"
	I1115 09:44:19.666921   70716 cri.go:89] found id: "1f26d41b1ae72e1dc34f54376e004475d2c4c414c8b70169e5095a83b0a7212d"
	I1115 09:44:19.666925   70716 cri.go:89] found id: ""
	I1115 09:44:19.666988   70716 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:44:19.681505   70716 out.go:203] 
	W1115 09:44:19.682780   70716 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:44:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:44:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:44:19.682805   70716 out.go:285] * 
	* 
	W1115 09:44:19.687423   70716 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:44:19.688987   70716 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-209049 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.25s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-zxglt" [2d4f6e85-2a3f-4fc9-85de-7a92b0cc0241] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003656469s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-209049 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-209049 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (242.062115ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:44:17.194741   70606 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:44:17.194912   70606 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:44:17.194924   70606 out.go:374] Setting ErrFile to fd 2...
	I1115 09:44:17.194927   70606 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:44:17.195144   70606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 09:44:17.195422   70606 mustload.go:66] Loading cluster: addons-209049
	I1115 09:44:17.195751   70606 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:44:17.195766   70606 addons.go:607] checking whether the cluster is paused
	I1115 09:44:17.195849   70606 config.go:182] Loaded profile config "addons-209049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:44:17.195862   70606 host.go:66] Checking if "addons-209049" exists ...
	I1115 09:44:17.196287   70606 cli_runner.go:164] Run: docker container inspect addons-209049 --format={{.State.Status}}
	I1115 09:44:17.213609   70606 ssh_runner.go:195] Run: systemctl --version
	I1115 09:44:17.213664   70606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-209049
	I1115 09:44:17.232045   70606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/addons-209049/id_rsa Username:docker}
	I1115 09:44:17.324564   70606 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:44:17.324658   70606 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:44:17.353608   70606 cri.go:89] found id: "3caa65a862513e243b22e76e1bbf885e3ac037a176ca99b12f91d698216975db"
	I1115 09:44:17.353663   70606 cri.go:89] found id: "b0ff95e639d4d5eb2c0d77d21ada78f8cd1fb53b0f074a034a43d45e26991503"
	I1115 09:44:17.353670   70606 cri.go:89] found id: "c9bdb51e12a14483aa19c676698377a0b60b77720f0628965eea3278bbb8e041"
	I1115 09:44:17.353675   70606 cri.go:89] found id: "e7dcd097399e865e5ae5f0e729bdc0f1048db8b421e3cb038204849e0f017ce4"
	I1115 09:44:17.353679   70606 cri.go:89] found id: "acf4ca35c9f55a5b7ca511482b0cfe1fd19ad4da286e419734932ebc92d08987"
	I1115 09:44:17.353685   70606 cri.go:89] found id: "171c9fa6da7f829033739f1a599a437a0f9fb0490f6e35cb28121675ec6610d5"
	I1115 09:44:17.353687   70606 cri.go:89] found id: "ff456e58c6b5371ae6d2128ce2bd4b5688b2bdd7c3b3eb2b96101bb1b58fe2c5"
	I1115 09:44:17.353690   70606 cri.go:89] found id: "ee4946da5ae0da768ba560dc1b29498a8426405bac06d4d18f0a838d40c80725"
	I1115 09:44:17.353692   70606 cri.go:89] found id: "103636dd26caf3b9549cd406346b20c65628300e437e7be4616d3f0077c744bd"
	I1115 09:44:17.353703   70606 cri.go:89] found id: "49680c8d74f4f54e56ef4ffd5f2ca74f9034bbb45596e528bea07a9727288e4c"
	I1115 09:44:17.353706   70606 cri.go:89] found id: "b667435537d1ec9aa535058d15e290f6af371012bddf3d30095187b5c578a6f7"
	I1115 09:44:17.353708   70606 cri.go:89] found id: "0eebfaee45b609055247f2529eb805a5c5b268a2be72bc3dc72864bae3e6b98f"
	I1115 09:44:17.353711   70606 cri.go:89] found id: "2e0b28ec2dfa3079760cebaa9bb76189b77530f1910306d850dbf67dea10ec99"
	I1115 09:44:17.353713   70606 cri.go:89] found id: "4c734c004dda05124ee316d7d4e813486d10a56625af5fde238c99fe3c7fcb59"
	I1115 09:44:17.353716   70606 cri.go:89] found id: "db6577073b2a6a2e7ebbec11d5b9ea8189c70d1adfad3656b9c47577a4cda077"
	I1115 09:44:17.353731   70606 cri.go:89] found id: "ed655dc7f306b7a351e2adf5b80bffbf7072a9bb72cdb5eac89f029f44b5af1e"
	I1115 09:44:17.353740   70606 cri.go:89] found id: "abed161df6f25739be955e904c69c8cc237aee9607441efb5b85c298a8066bc3"
	I1115 09:44:17.353744   70606 cri.go:89] found id: "1bdef9117bea1f9beb91b7cd74c36b09d5d21005cb342d6d96771811af8c1a65"
	I1115 09:44:17.353747   70606 cri.go:89] found id: "c47933319f25ea439a895d8a214194c80c6c0e5be436f0cda95e520fb61fcc42"
	I1115 09:44:17.353749   70606 cri.go:89] found id: "bf4bdcddcb90f809aee08bb750d5e3282b1083556ad258032140c881f019a634"
	I1115 09:44:17.353754   70606 cri.go:89] found id: "2d0c6bfa456fdc1fae3dbb4e08aebd592ec5b3e4b99f82367927552be2506b4b"
	I1115 09:44:17.353756   70606 cri.go:89] found id: "fc273347a0fa0ac4a76f1735e0dc29501058daa141499c42e0703edf64d3cc86"
	I1115 09:44:17.353758   70606 cri.go:89] found id: "8f354571302cfa24ed2fcb9dc1a59908900201f2ee8a9f0985e0cd698988ceff"
	I1115 09:44:17.353761   70606 cri.go:89] found id: "1f26d41b1ae72e1dc34f54376e004475d2c4c414c8b70169e5095a83b0a7212d"
	I1115 09:44:17.353763   70606 cri.go:89] found id: ""
	I1115 09:44:17.353816   70606 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:44:17.368546   70606 out.go:203] 
	W1115 09:44:17.369989   70606 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:44:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:44:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:44:17.370015   70606 out.go:285] * 
	* 
	W1115 09:44:17.374284   70606 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:44:17.375459   70606 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-209049 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.25s)
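The MK_ADDON_DISABLE_PAUSED failures above all trace back to the same pre-flight check: before disabling an addon, minikube verifies that no kube-system container is paused, listing the containers through crictl and then checking their paused state with "sudo runc list -f json", which fails on this CRI-O node because the runc state directory /run/runc does not exist. A minimal way to reproduce the check by hand (a debugging sketch, not part of the test run; the alternate state-directory paths are assumptions):

	out/minikube-linux-amd64 -p addons-209049 ssh
	# the exact command the addon-disable path runs, which fails here:
	sudo runc list -f json
	# check which runtime state directories actually exist on this node (candidate paths are guesses):
	sudo ls -d /run/runc /run/crun /var/run/crio 2>/dev/null
	# the CRI-level listing used earlier in the same check:
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system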

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (602.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-169872 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-169872 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-5rfjv" [63cde62e-14e4-46e5-9865-f48742429a74] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1115 09:50:15.985482   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-169872 -n functional-169872
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-15 10:00:16.04707259 +0000 UTC m=+1196.398703775
functional_test.go:1645: (dbg) Run:  kubectl --context functional-169872 describe po hello-node-connect-7d85dfc575-5rfjv -n default
functional_test.go:1645: (dbg) kubectl --context functional-169872 describe po hello-node-connect-7d85dfc575-5rfjv -n default:
Name:             hello-node-connect-7d85dfc575-5rfjv
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-169872/192.168.49.2
Start Time:       Sat, 15 Nov 2025 09:50:15 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sn8cm (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-sn8cm:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-5rfjv to functional-169872
Normal   Pulling    6m50s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m50s (x5 over 9m48s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m50s (x5 over 9m48s)   kubelet            Error: ErrImagePull
Warning  Failed     4m47s (x20 over 9m48s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m33s (x21 over 9m48s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-169872 logs hello-node-connect-7d85dfc575-5rfjv -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-169872 logs hello-node-connect-7d85dfc575-5rfjv -n default: exit status 1 (69.742099ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-5rfjv" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-169872 logs hello-node-connect-7d85dfc575-5rfjv -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-169872 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-5rfjv
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-169872/192.168.49.2
Start Time:       Sat, 15 Nov 2025 09:50:15 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sn8cm (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-sn8cm:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-5rfjv to functional-169872
Normal   Pulling    6m50s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m50s (x5 over 9m48s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m50s (x5 over 9m48s)   kubelet            Error: ErrImagePull
Warning  Failed     4m47s (x20 over 9m48s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m33s (x21 over 9m48s)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-169872 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-169872 logs -l app=hello-node-connect: exit status 1 (61.651646ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-5rfjv" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-169872 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-169872 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.198.194
IPs:                      10.96.198.194
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30785/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
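The pod events and the empty Endpoints field above point at a single root cause: the deployment was created with the unqualified image "kicbase/echo-server", and with short-name mode enforcing on the node's container runtime an ambiguous short name is rejected instead of resolved, so the image never pulls, the pod never becomes Ready, and the NodePort service has no endpoints to route to. A hedged sketch of how this could be checked or worked around (the fully-qualified reference docker.io/kicbase/echo-server and the registries.conf location are assumptions, not something this report confirms):

	# inspect the short-name policy inside the node:
	out/minikube-linux-amd64 -p functional-169872 ssh -- grep -R "short-name-mode" /etc/containers/
	# recreate the workload with a fully-qualified image reference:
	kubectl --context functional-169872 delete deployment hello-node-connect
	kubectl --context functional-169872 create deployment hello-node-connect --image docker.io/kicbase/echo-server
	kubectl --context functional-169872 expose deployment hello-node-connect --type=NodePort --port=8080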
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-169872
helpers_test.go:243: (dbg) docker inspect functional-169872:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0c32c83a984fa027be14d2cafbff4633934285aa1145fa35418fd46b18862f5b",
	        "Created": "2025-11-15T09:47:52.733269894Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 82430,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T09:47:52.763637859Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/0c32c83a984fa027be14d2cafbff4633934285aa1145fa35418fd46b18862f5b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0c32c83a984fa027be14d2cafbff4633934285aa1145fa35418fd46b18862f5b/hostname",
	        "HostsPath": "/var/lib/docker/containers/0c32c83a984fa027be14d2cafbff4633934285aa1145fa35418fd46b18862f5b/hosts",
	        "LogPath": "/var/lib/docker/containers/0c32c83a984fa027be14d2cafbff4633934285aa1145fa35418fd46b18862f5b/0c32c83a984fa027be14d2cafbff4633934285aa1145fa35418fd46b18862f5b-json.log",
	        "Name": "/functional-169872",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-169872:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-169872",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0c32c83a984fa027be14d2cafbff4633934285aa1145fa35418fd46b18862f5b",
	                "LowerDir": "/var/lib/docker/overlay2/e44f650dae5c0b7f1ac77234a37cbb0763d02f5c9c929858416e3e07d75c9baf-init/diff:/var/lib/docker/overlay2/507c85b6fb43d9c98216fac79d7a8d08dd20b2a63d0fbfc758336e5d38c04044/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e44f650dae5c0b7f1ac77234a37cbb0763d02f5c9c929858416e3e07d75c9baf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e44f650dae5c0b7f1ac77234a37cbb0763d02f5c9c929858416e3e07d75c9baf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e44f650dae5c0b7f1ac77234a37cbb0763d02f5c9c929858416e3e07d75c9baf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-169872",
	                "Source": "/var/lib/docker/volumes/functional-169872/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-169872",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-169872",
	                "name.minikube.sigs.k8s.io": "functional-169872",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6cd681bab7bd3b2fc0e1d6830ab4d377fae4e85a5a627e95ddf6aee3d22ae4ff",
	            "SandboxKey": "/var/run/docker/netns/6cd681bab7bd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-169872": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5fed5431a4e22226ba867e75fdeacb8dc90798bb800f272cb92e53ffa48b848a",
	                    "EndpointID": "f2926d365280869d5cc3407e6a8bd7f0b0ae75a38f9b2704513812af351505a8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "0e:27:d4:95:15:47",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-169872",
	                        "0c32c83a984f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-169872 -n functional-169872
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-169872 logs -n 25: (1.28152545s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ dashboard      │ --url --port 36195 -p functional-169872 --alsologtostderr -v=1                                                                                                  │ functional-169872 │ jenkins │ v1.37.0 │ 15 Nov 25 09:50 UTC │ 15 Nov 25 09:51 UTC │
	│ image          │ functional-169872 image load --daemon kicbase/echo-server:functional-169872 --alsologtostderr                                                                   │ functional-169872 │ jenkins │ v1.37.0 │ 15 Nov 25 09:50 UTC │ 15 Nov 25 09:50 UTC │
	│ image          │ functional-169872 image ls                                                                                                                                      │ functional-169872 │ jenkins │ v1.37.0 │ 15 Nov 25 09:50 UTC │ 15 Nov 25 09:50 UTC │
	│ image          │ functional-169872 image load --daemon kicbase/echo-server:functional-169872 --alsologtostderr                                                                   │ functional-169872 │ jenkins │ v1.37.0 │ 15 Nov 25 09:50 UTC │ 15 Nov 25 09:50 UTC │
	│ image          │ functional-169872 image ls                                                                                                                                      │ functional-169872 │ jenkins │ v1.37.0 │ 15 Nov 25 09:50 UTC │ 15 Nov 25 09:50 UTC │
	│ image          │ functional-169872 image load --daemon kicbase/echo-server:functional-169872 --alsologtostderr                                                                   │ functional-169872 │ jenkins │ v1.37.0 │ 15 Nov 25 09:50 UTC │ 15 Nov 25 09:50 UTC │
	│ image          │ functional-169872 image ls                                                                                                                                      │ functional-169872 │ jenkins │ v1.37.0 │ 15 Nov 25 09:50 UTC │ 15 Nov 25 09:50 UTC │
	│ image          │ functional-169872 image save kicbase/echo-server:functional-169872 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-169872 │ jenkins │ v1.37.0 │ 15 Nov 25 09:50 UTC │ 15 Nov 25 09:50 UTC │
	│ image          │ functional-169872 image rm kicbase/echo-server:functional-169872 --alsologtostderr                                                                              │ functional-169872 │ jenkins │ v1.37.0 │ 15 Nov 25 09:50 UTC │ 15 Nov 25 09:50 UTC │
	│ image          │ functional-169872 image ls                                                                                                                                      │ functional-169872 │ jenkins │ v1.37.0 │ 15 Nov 25 09:50 UTC │ 15 Nov 25 09:51 UTC │
	│ image          │ functional-169872 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-169872 │ jenkins │ v1.37.0 │ 15 Nov 25 09:51 UTC │ 15 Nov 25 09:51 UTC │
	│ image          │ functional-169872 image save --daemon kicbase/echo-server:functional-169872 --alsologtostderr                                                                   │ functional-169872 │ jenkins │ v1.37.0 │ 15 Nov 25 09:51 UTC │ 15 Nov 25 09:51 UTC │
	│ license        │                                                                                                                                                                 │ minikube          │ jenkins │ v1.37.0 │ 15 Nov 25 09:51 UTC │ 15 Nov 25 09:51 UTC │
	│ ssh            │ functional-169872 ssh sudo systemctl is-active docker                                                                                                           │ functional-169872 │ jenkins │ v1.37.0 │ 15 Nov 25 09:51 UTC │                     │
	│ ssh            │ functional-169872 ssh sudo systemctl is-active containerd                                                                                                       │ functional-169872 │ jenkins │ v1.37.0 │ 15 Nov 25 09:51 UTC │                     │
	│ image          │ functional-169872 image ls --format short --alsologtostderr                                                                                                     │ functional-169872 │ jenkins │ v1.37.0 │ 15 Nov 25 09:51 UTC │ 15 Nov 25 09:51 UTC │
	│ image          │ functional-169872 image ls --format json --alsologtostderr                                                                                                      │ functional-169872 │ jenkins │ v1.37.0 │ 15 Nov 25 09:51 UTC │ 15 Nov 25 09:51 UTC │
	│ image          │ functional-169872 image ls --format table --alsologtostderr                                                                                                     │ functional-169872 │ jenkins │ v1.37.0 │ 15 Nov 25 09:51 UTC │ 15 Nov 25 09:51 UTC │
	│ image          │ functional-169872 image ls --format yaml --alsologtostderr                                                                                                      │ functional-169872 │ jenkins │ v1.37.0 │ 15 Nov 25 09:51 UTC │ 15 Nov 25 09:51 UTC │
	│ ssh            │ functional-169872 ssh pgrep buildkitd                                                                                                                           │ functional-169872 │ jenkins │ v1.37.0 │ 15 Nov 25 09:51 UTC │                     │
	│ image          │ functional-169872 image build -t localhost/my-image:functional-169872 testdata/build --alsologtostderr                                                          │ functional-169872 │ jenkins │ v1.37.0 │ 15 Nov 25 09:51 UTC │ 15 Nov 25 09:51 UTC │
	│ update-context │ functional-169872 update-context --alsologtostderr -v=2                                                                                                         │ functional-169872 │ jenkins │ v1.37.0 │ 15 Nov 25 09:51 UTC │ 15 Nov 25 09:51 UTC │
	│ update-context │ functional-169872 update-context --alsologtostderr -v=2                                                                                                         │ functional-169872 │ jenkins │ v1.37.0 │ 15 Nov 25 09:51 UTC │ 15 Nov 25 09:51 UTC │
	│ update-context │ functional-169872 update-context --alsologtostderr -v=2                                                                                                         │ functional-169872 │ jenkins │ v1.37.0 │ 15 Nov 25 09:51 UTC │ 15 Nov 25 09:51 UTC │
	│ image          │ functional-169872 image ls                                                                                                                                      │ functional-169872 │ jenkins │ v1.37.0 │ 15 Nov 25 09:51 UTC │ 15 Nov 25 09:51 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:50:48
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:50:48.904644   97349 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:50:48.904880   97349 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:50:48.904889   97349 out.go:374] Setting ErrFile to fd 2...
	I1115 09:50:48.904893   97349 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:50:48.905108   97349 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 09:50:48.905521   97349 out.go:368] Setting JSON to false
	I1115 09:50:48.906486   97349 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5586,"bootTime":1763194663,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:50:48.906577   97349 start.go:143] virtualization: kvm guest
	I1115 09:50:48.908336   97349 out.go:179] * [functional-169872] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:50:48.909620   97349 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 09:50:48.909600   97349 notify.go:221] Checking for updates...
	I1115 09:50:48.910946   97349 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:50:48.912005   97349 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 09:50:48.913194   97349 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube
	I1115 09:50:48.914431   97349 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 09:50:48.915581   97349 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:50:48.917319   97349 config.go:182] Loaded profile config "functional-169872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:50:48.918011   97349 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:50:48.941927   97349 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 09:50:48.942048   97349 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:50:48.998265   97349 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:56 SystemTime:2025-11-15 09:50:48.987581921 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:50:48.998388   97349 docker.go:319] overlay module found
	I1115 09:50:49.000947   97349 out.go:179] * Using the docker driver based on existing profile
	I1115 09:50:49.002046   97349 start.go:309] selected driver: docker
	I1115 09:50:49.002060   97349 start.go:930] validating driver "docker" against &{Name:functional-169872 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-169872 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:50:49.002146   97349 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:50:49.002236   97349 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:50:49.059106   97349 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:56 SystemTime:2025-11-15 09:50:49.049576411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:50:49.059759   97349 cni.go:84] Creating CNI manager for ""
	I1115 09:50:49.059825   97349 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:50:49.059872   97349 start.go:353] cluster config:
	{Name:functional-169872 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-169872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:50:49.061529   97349 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Nov 15 09:50:58 functional-169872 crio[4142]: time="2025-11-15T09:50:58.516577098Z" level=info msg="Created container 7711074d8d54d282771b76d1d4582f2310fa3ae2947286e16ddaf1ef3530af5c: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-djlrp/dashboard-metrics-scraper" id=d4a8cfc4-7bb9-408e-a22e-1a45f26396ef name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 09:50:58 functional-169872 crio[4142]: time="2025-11-15T09:50:58.51727397Z" level=info msg="Starting container: 7711074d8d54d282771b76d1d4582f2310fa3ae2947286e16ddaf1ef3530af5c" id=c5f0dff2-c0af-49cf-8fc2-4b0a59e13746 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 09:50:58 functional-169872 crio[4142]: time="2025-11-15T09:50:58.518999507Z" level=info msg="Started container" PID=7761 containerID=7711074d8d54d282771b76d1d4582f2310fa3ae2947286e16ddaf1ef3530af5c description=kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-djlrp/dashboard-metrics-scraper id=c5f0dff2-c0af-49cf-8fc2-4b0a59e13746 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8e81a72d408303ac1c21df804890bb643d48864942ec2b90b9f7e464e83ae7c9
	Nov 15 09:50:58 functional-169872 crio[4142]: time="2025-11-15T09:50:58.979044631Z" level=info msg="Checking image status: kicbase/echo-server:functional-169872" id=e1e9c0c8-3788-453a-9066-7d0dd0c0a170 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:50:59 functional-169872 crio[4142]: time="2025-11-15T09:50:59.004828179Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-169872" id=bbe4708a-e8d1-446b-97e3-489d6204d3bc name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:50:59 functional-169872 crio[4142]: time="2025-11-15T09:50:59.004984212Z" level=info msg="Image docker.io/kicbase/echo-server:functional-169872 not found" id=bbe4708a-e8d1-446b-97e3-489d6204d3bc name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:50:59 functional-169872 crio[4142]: time="2025-11-15T09:50:59.005022222Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-169872 found" id=bbe4708a-e8d1-446b-97e3-489d6204d3bc name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:50:59 functional-169872 crio[4142]: time="2025-11-15T09:50:59.029028692Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-169872" id=0b13332e-ea30-4e53-8437-bbdcc483e802 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:50:59 functional-169872 crio[4142]: time="2025-11-15T09:50:59.029239601Z" level=info msg="Image localhost/kicbase/echo-server:functional-169872 not found" id=0b13332e-ea30-4e53-8437-bbdcc483e802 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:50:59 functional-169872 crio[4142]: time="2025-11-15T09:50:59.029294406Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-169872 found" id=0b13332e-ea30-4e53-8437-bbdcc483e802 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:50:59 functional-169872 crio[4142]: time="2025-11-15T09:50:59.761840039Z" level=info msg="Checking image status: kicbase/echo-server:functional-169872" id=fa2a6c4b-4589-4721-8097-00b73162a92d name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:50:59 functional-169872 crio[4142]: time="2025-11-15T09:50:59.78580452Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-169872" id=88b80d33-27e8-4cf3-8521-bd700e889085 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:50:59 functional-169872 crio[4142]: time="2025-11-15T09:50:59.785946203Z" level=info msg="Image docker.io/kicbase/echo-server:functional-169872 not found" id=88b80d33-27e8-4cf3-8521-bd700e889085 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:50:59 functional-169872 crio[4142]: time="2025-11-15T09:50:59.786020082Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-169872 found" id=88b80d33-27e8-4cf3-8521-bd700e889085 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:50:59 functional-169872 crio[4142]: time="2025-11-15T09:50:59.809454313Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-169872" id=e74da00b-1065-4953-9fd4-9a1ceb48b837 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:50:59 functional-169872 crio[4142]: time="2025-11-15T09:50:59.809569291Z" level=info msg="Image localhost/kicbase/echo-server:functional-169872 not found" id=e74da00b-1065-4953-9fd4-9a1ceb48b837 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:50:59 functional-169872 crio[4142]: time="2025-11-15T09:50:59.809601326Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-169872 found" id=e74da00b-1065-4953-9fd4-9a1ceb48b837 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:51:06 functional-169872 crio[4142]: time="2025-11-15T09:51:06.687326124Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8e511b78-d533-4a21-a4a0-edf5eb0464e3 name=/runtime.v1.ImageService/PullImage
	Nov 15 09:51:12 functional-169872 crio[4142]: time="2025-11-15T09:51:12.687908812Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=025076ea-3982-46bc-8188-3ffc6572d7c3 name=/runtime.v1.ImageService/PullImage
	Nov 15 09:51:55 functional-169872 crio[4142]: time="2025-11-15T09:51:55.68786737Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f059d775-d679-49a0-b779-1915d976d420 name=/runtime.v1.ImageService/PullImage
	Nov 15 09:51:56 functional-169872 crio[4142]: time="2025-11-15T09:51:56.687564804Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f888314c-5edb-48ba-a7c6-2d0e24c9fa01 name=/runtime.v1.ImageService/PullImage
	Nov 15 09:53:26 functional-169872 crio[4142]: time="2025-11-15T09:53:26.687999743Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f695e65a-c5ac-46f9-b48b-3854aac3ea63 name=/runtime.v1.ImageService/PullImage
	Nov 15 09:53:27 functional-169872 crio[4142]: time="2025-11-15T09:53:27.687576342Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=af82b074-e722-4d41-bda7-bd6e820d4963 name=/runtime.v1.ImageService/PullImage
	Nov 15 09:56:08 functional-169872 crio[4142]: time="2025-11-15T09:56:08.687176273Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=2f93700b-d311-4768-8966-a51e2bf9066a name=/runtime.v1.ImageService/PullImage
	Nov 15 09:56:18 functional-169872 crio[4142]: time="2025-11-15T09:56:18.68812469Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=6706826d-3e42-4beb-837b-85b09b0d5c87 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	7711074d8d54d       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   8e81a72d40830       dashboard-metrics-scraper-77bf4d6c4c-djlrp   kubernetes-dashboard
	e226d89a775c7       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   31cb26625cdb8       kubernetes-dashboard-855c9754f9-wcn5x        kubernetes-dashboard
	fccdd6d0c3a2d       docker.io/library/nginx@sha256:bd1578eec775d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b                  9 minutes ago       Running             myfrontend                  0                   f33649f57b99a       sp-pod                                       default
	b8697bb012d28       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   873cbff1c1f4c       busybox-mount                                default
	8af41e6db4d21       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  9 minutes ago       Running             nginx                       0                   ce460d4bd8b01       nginx-svc                                    default
	405506d722723       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  9 minutes ago       Running             mysql                       0                   8810dd127d7ea       mysql-5bb876957f-thgfq                       default
	2dcc6920d238a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 10 minutes ago      Running             coredns                     2                   d3e280ab2177b       coredns-66bc5c9577-dskx4                     kube-system
	2342997459c96       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 10 minutes ago      Running             kindnet-cni                 2                   3400b94ca4026       kindnet-wbdjc                                kube-system
	e817509901e85       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 10 minutes ago      Running             kube-proxy                  2                   910f00481b7fe       kube-proxy-hj6rc                             kube-system
	f53fd0e5f0bad       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         3                   8c212c3d66d0f       storage-provisioner                          kube-system
	b8cfa8af4b117       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   7471d50467358       kube-apiserver-functional-169872             kube-system
	710828d59c4bd       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        2                   4d6317c395ff0       etcd-functional-169872                       kube-system
	1e3bca9d9c1fa       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     2                   570c7fa408bf5       kube-controller-manager-functional-169872    kube-system
	d1a2fe4ba51fa       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 10 minutes ago      Running             kube-scheduler              2                   db8bec0fe38a5       kube-scheduler-functional-169872             kube-system
	8661449ae43ae       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Exited              storage-provisioner         2                   8c212c3d66d0f       storage-provisioner                          kube-system
	24de4ec4b6857       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 11 minutes ago      Exited              kube-controller-manager     1                   570c7fa408bf5       kube-controller-manager-functional-169872    kube-system
	fcc948653104e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     1                   d3e280ab2177b       coredns-66bc5c9577-dskx4                     kube-system
	6baea6a6b9d0e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Exited              kube-scheduler              1                   db8bec0fe38a5       kube-scheduler-functional-169872             kube-system
	d34331eb04f01       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Exited              etcd                        1                   4d6317c395ff0       etcd-functional-169872                       kube-system
	14ba9cc72c6b1       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Exited              kube-proxy                  1                   910f00481b7fe       kube-proxy-hj6rc                             kube-system
	9fb4adeb9c2a1       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 1                   3400b94ca4026       kindnet-wbdjc                                kube-system
	
	
	==> coredns [2dcc6920d238a254d625c77148392a250614b129a6ca3e88ce92504e400e44ee] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36316 - 2922 "HINFO IN 6475787914197598610.1311088614678499533. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015201837s
	
	
	==> coredns [fcc948653104e02905b1a5e6da22affeeec1e7c716ca4a6bf6f8fba571443a75] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45607 - 4299 "HINFO IN 906657146889409340.8857473442034769942. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.017652486s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-169872
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-169872
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=functional-169872
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T09_48_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 09:48:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-169872
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:00:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:00:04 +0000   Sat, 15 Nov 2025 09:48:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:00:04 +0000   Sat, 15 Nov 2025 09:48:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:00:04 +0000   Sat, 15 Nov 2025 09:48:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:00:04 +0000   Sat, 15 Nov 2025 09:48:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-169872
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                e79c527e-5082-4469-8240-e295711c5105
	  Boot ID:                    6a33f504-3a8c-4d49-8fc5-301cf5275efc
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-nrvb4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m43s
	  default                     hello-node-connect-7d85dfc575-5rfjv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-thgfq                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m34s
	  kube-system                 coredns-66bc5c9577-dskx4                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-169872                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-wbdjc                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-169872              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-169872     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-hj6rc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-169872              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-djlrp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m27s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-wcn5x         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-169872 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-169872 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-169872 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-169872 event: Registered Node functional-169872 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-169872 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-169872 event: Registered Node functional-169872 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-169872 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-169872 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-169872 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-169872 event: Registered Node functional-169872 in Controller
	
	
	==> dmesg <==
	[  +0.023932] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.604079] kauditd_printk_skb: 47 callbacks suppressed
	[Nov15 09:41] kmem.limit_in_bytes is deprecated and will be removed. Writing any value to this file has no effect. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov15 09:44] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +1.059558] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +1.023907] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +1.023868] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +1.023912] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +1.023925] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +2.047814] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +4.031639] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +8.127259] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[Nov15 09:45] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[ +32.253211] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	
	
	==> etcd [710828d59c4bd34738c41d8995ebb224b022bb90e1b75330bb9ed4635a2f536f] <==
	{"level":"warn","ts":"2025-11-15T09:49:51.808008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:49:51.814074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:49:51.820020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:49:51.826156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:49:51.888067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:49:51.894943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:49:51.901335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:49:51.911064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:49:51.917020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:49:51.923898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:49:51.930986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:49:51.938059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:49:51.945479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:49:51.980910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:49:51.987399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:49:51.994285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:49:52.000482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:49:52.006589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:49:52.013647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:49:52.033677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:49:52.047652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:49:52.129831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57170","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-15T09:59:51.210792Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1209}
	{"level":"info","ts":"2025-11-15T09:59:51.231454Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1209,"took":"20.207281ms","hash":1548405123,"current-db-size-bytes":3706880,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":1777664,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-11-15T09:59:51.231516Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1548405123,"revision":1209,"compact-revision":-1}
	
	
	==> etcd [d34331eb04f012ac4fde057303ea2ae4446307517a70e74fe54eeefd98cee1ad] <==
	{"level":"warn","ts":"2025-11-15T09:49:13.494656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:49:13.501688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:49:13.507736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:49:13.520376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:49:13.526725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:49:13.533154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:49:13.586791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54638","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-15T09:49:40.591107Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-15T09:49:40.591304Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-169872","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-15T09:49:40.591419Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-15T09:49:40.661650Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-15T09:49:40.661740Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T09:49:40.661762Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-11-15T09:49:40.661823Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-15T09:49:40.661821Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-11-15T09:49:40.661870Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-15T09:49:40.661866Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-15T09:49:40.661879Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-15T09:49:40.661889Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T09:49:40.661865Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2025-11-15T09:49:40.661892Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T09:49:40.664093Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-15T09:49:40.664150Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T09:49:40.664178Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-15T09:49:40.664184Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-169872","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 10:00:17 up  1:42,  0 user,  load average: 0.10, 0.30, 0.94
	Linux functional-169872 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2342997459c961ad0a4c361a9763cc92e9341f5f1e9b0aa24da9bc1b23df2f51] <==
	I1115 09:58:14.423987       1 main.go:301] handling current node
	I1115 09:58:24.425750       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:58:24.425796       1 main.go:301] handling current node
	I1115 09:58:34.424413       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:58:34.424454       1 main.go:301] handling current node
	I1115 09:58:44.425502       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:58:44.425544       1 main.go:301] handling current node
	I1115 09:58:54.428542       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:58:54.428587       1 main.go:301] handling current node
	I1115 09:59:04.425234       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:59:04.425278       1 main.go:301] handling current node
	I1115 09:59:14.425041       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:59:14.425073       1 main.go:301] handling current node
	I1115 09:59:24.423937       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:59:24.424020       1 main.go:301] handling current node
	I1115 09:59:34.425413       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:59:34.425456       1 main.go:301] handling current node
	I1115 09:59:44.424817       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:59:44.424863       1 main.go:301] handling current node
	I1115 09:59:54.424115       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:59:54.424153       1 main.go:301] handling current node
	I1115 10:00:04.424610       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:00:04.424668       1 main.go:301] handling current node
	I1115 10:00:14.424362       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:00:14.424396       1 main.go:301] handling current node
	
	
	==> kindnet [9fb4adeb9c2a13207ccbbe7e964ff7cdf48dacf5c33d14c7c9d315e1fb52d73b] <==
	I1115 09:49:11.287472       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 09:49:11.377687       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1115 09:49:11.377999       1 main.go:148] setting mtu 1500 for CNI 
	I1115 09:49:11.378070       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 09:49:11.378119       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T09:49:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 09:49:11.583729       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 09:49:11.661411       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 09:49:11.661477       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 09:49:11.661668       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 09:49:14.192519       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 09:49:14.192704       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1115 09:49:14.192800       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 09:49:14.192601       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1115 09:49:15.861752       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 09:49:15.861788       1 metrics.go:72] Registering metrics
	I1115 09:49:15.861895       1 controller.go:711] "Syncing nftables rules"
	I1115 09:49:21.586646       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:49:21.586682       1 main.go:301] handling current node
	I1115 09:49:31.583315       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:49:31.583348       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b8cfa8af4b117bdd4aec0d75b36f76576b3f20c99b247b84900f018bdd2f742c] <==
	I1115 09:49:52.994260       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 09:49:52.999329       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 09:49:53.797576       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 09:49:53.815894       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 09:49:54.659581       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 09:49:54.753463       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 09:49:54.814326       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 09:49:54.821706       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 09:49:56.287616       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 09:49:56.586576       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 09:49:56.638533       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 09:50:09.354153       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.120.180"}
	I1115 09:50:13.352982       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.111.92.133"}
	I1115 09:50:15.063323       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.218.122"}
	I1115 09:50:15.712102       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.198.194"}
	E1115 09:50:31.686088       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:56878: use of closed network connection
	E1115 09:50:32.504317       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:56896: use of closed network connection
	E1115 09:50:34.396182       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:56912: use of closed network connection
	I1115 09:50:34.398284       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.100.63.127"}
	E1115 09:50:42.089148       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:35380: use of closed network connection
	I1115 09:50:49.949165       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 09:50:50.107259       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.248.248"}
	I1115 09:50:50.182371       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.117.54"}
	E1115 09:50:52.324112       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:55126: use of closed network connection
	I1115 09:59:52.900033       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [1e3bca9d9c1fa6b18d791ee94615518d59faf974254c1efbae45161a94b6f1a2] <==
	I1115 09:49:56.233643       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 09:49:56.233736       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 09:49:56.233766       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 09:49:56.233781       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 09:49:56.233797       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 09:49:56.233786       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1115 09:49:56.233893       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 09:49:56.234408       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 09:49:56.235207       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 09:49:56.235232       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1115 09:49:56.235384       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 09:49:56.237195       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 09:49:56.237217       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 09:49:56.240594       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 09:49:56.240659       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 09:49:56.240712       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-169872"
	I1115 09:49:56.240745       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1115 09:49:56.242899       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1115 09:49:56.254591       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1115 09:50:50.008753       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1115 09:50:50.012035       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1115 09:50:50.012718       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1115 09:50:50.015553       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1115 09:50:50.017105       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1115 09:50:50.021234       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [24de4ec4b6857ae1a219ea353b900f3b1eebcb66539b662eec3fcdd8a09568d1] <==
	I1115 09:49:17.452768       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 09:49:17.453239       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1115 09:49:17.453262       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 09:49:17.453985       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 09:49:17.456273       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 09:49:17.458170       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 09:49:17.460350       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 09:49:17.460400       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 09:49:17.469664       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 09:49:17.471909       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 09:49:17.473062       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 09:49:17.475322       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1115 09:49:17.502757       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 09:49:17.502794       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 09:49:17.502832       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1115 09:49:17.502836       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 09:49:17.502865       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 09:49:17.502889       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1115 09:49:17.502912       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 09:49:17.502917       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 09:49:17.503082       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 09:49:17.503095       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1115 09:49:17.508050       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 09:49:17.508282       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 09:49:17.523684       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [14ba9cc72c6b11c5e0bcf2cb4e07ada0fbabbed26b14474bbfe147667245f2f3] <==
	I1115 09:49:11.302230       1 server_linux.go:53] "Using iptables proxy"
	I1115 09:49:11.502849       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 09:49:14.203611       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 09:49:14.203647       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1115 09:49:14.203707       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 09:49:14.389498       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 09:49:14.389557       1 server_linux.go:132] "Using iptables Proxier"
	I1115 09:49:14.396827       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 09:49:14.397166       1 server.go:527] "Version info" version="v1.34.1"
	I1115 09:49:14.397190       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:49:14.398798       1 config.go:200] "Starting service config controller"
	I1115 09:49:14.398831       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 09:49:14.399278       1 config.go:309] "Starting node config controller"
	I1115 09:49:14.399335       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 09:49:14.399463       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 09:49:14.400517       1 config.go:106] "Starting endpoint slice config controller"
	I1115 09:49:14.404841       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 09:49:14.400679       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 09:49:14.404965       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 09:49:14.499420       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 09:49:14.505668       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 09:49:14.505703       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [e817509901e85af8f9bf402c0a670d7e45ca14480431eb3ab836dfc7e62fd851] <==
	I1115 09:49:54.188221       1 server_linux.go:53] "Using iptables proxy"
	I1115 09:49:54.250761       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 09:49:54.350978       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 09:49:54.351021       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1115 09:49:54.351145       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 09:49:54.371621       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 09:49:54.371685       1 server_linux.go:132] "Using iptables Proxier"
	I1115 09:49:54.377466       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 09:49:54.377911       1 server.go:527] "Version info" version="v1.34.1"
	I1115 09:49:54.377965       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:49:54.379587       1 config.go:200] "Starting service config controller"
	I1115 09:49:54.379604       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 09:49:54.379763       1 config.go:309] "Starting node config controller"
	I1115 09:49:54.379805       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 09:49:54.379815       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 09:49:54.379889       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 09:49:54.379927       1 config.go:106] "Starting endpoint slice config controller"
	I1115 09:49:54.379979       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 09:49:54.379944       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 09:49:54.480436       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 09:49:54.480494       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 09:49:54.480494       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [6baea6a6b9d0e4c1a158b7fc9ce68628965f185a9c47b612943e13ca9d8fd973] <==
	I1115 09:49:12.077968       1 serving.go:386] Generated self-signed cert in-memory
	W1115 09:49:14.177253       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1115 09:49:14.179119       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1115 09:49:14.179150       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1115 09:49:14.179161       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1115 09:49:14.279474       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 09:49:14.279600       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:49:14.283244       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 09:49:14.283749       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 09:49:14.285749       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 09:49:14.284825       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 09:49:14.386850       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 09:49:40.596274       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1115 09:49:40.596728       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1115 09:49:40.597063       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1115 09:49:40.596873       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 09:49:40.597102       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1115 09:49:40.597233       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d1a2fe4ba51fa78640834df56499c84ebb97be84dcb15e0c25d6273a2f21f6a7] <==
	I1115 09:49:50.918480       1 serving.go:386] Generated self-signed cert in-memory
	W1115 09:49:52.890342       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1115 09:49:52.890462       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W1115 09:49:52.890500       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1115 09:49:52.890530       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1115 09:49:52.989710       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 09:49:52.989800       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:49:52.993480       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 09:49:52.993561       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 09:49:52.994740       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 09:49:52.994849       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 09:49:53.094469       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 09:57:42 functional-169872 kubelet[4526]: E1115 09:57:42.687153    4526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5rfjv" podUID="63cde62e-14e4-46e5-9865-f48742429a74"
	Nov 15 09:57:53 functional-169872 kubelet[4526]: E1115 09:57:53.687249    4526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5rfjv" podUID="63cde62e-14e4-46e5-9865-f48742429a74"
	Nov 15 09:57:54 functional-169872 kubelet[4526]: E1115 09:57:54.687765    4526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-nrvb4" podUID="e3a39c2d-2b6d-4527-b9cf-e1568ddcf995"
	Nov 15 09:58:05 functional-169872 kubelet[4526]: E1115 09:58:05.687868    4526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-nrvb4" podUID="e3a39c2d-2b6d-4527-b9cf-e1568ddcf995"
	Nov 15 09:58:06 functional-169872 kubelet[4526]: E1115 09:58:06.687042    4526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5rfjv" podUID="63cde62e-14e4-46e5-9865-f48742429a74"
	Nov 15 09:58:16 functional-169872 kubelet[4526]: E1115 09:58:16.687495    4526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-nrvb4" podUID="e3a39c2d-2b6d-4527-b9cf-e1568ddcf995"
	Nov 15 09:58:17 functional-169872 kubelet[4526]: E1115 09:58:17.686932    4526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5rfjv" podUID="63cde62e-14e4-46e5-9865-f48742429a74"
	Nov 15 09:58:27 functional-169872 kubelet[4526]: E1115 09:58:27.687541    4526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-nrvb4" podUID="e3a39c2d-2b6d-4527-b9cf-e1568ddcf995"
	Nov 15 09:58:32 functional-169872 kubelet[4526]: E1115 09:58:32.687275    4526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5rfjv" podUID="63cde62e-14e4-46e5-9865-f48742429a74"
	Nov 15 09:58:41 functional-169872 kubelet[4526]: E1115 09:58:41.687267    4526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-nrvb4" podUID="e3a39c2d-2b6d-4527-b9cf-e1568ddcf995"
	Nov 15 09:58:44 functional-169872 kubelet[4526]: E1115 09:58:44.687340    4526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5rfjv" podUID="63cde62e-14e4-46e5-9865-f48742429a74"
	Nov 15 09:58:55 functional-169872 kubelet[4526]: E1115 09:58:55.687827    4526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-nrvb4" podUID="e3a39c2d-2b6d-4527-b9cf-e1568ddcf995"
	Nov 15 09:58:55 functional-169872 kubelet[4526]: E1115 09:58:55.687858    4526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5rfjv" podUID="63cde62e-14e4-46e5-9865-f48742429a74"
	Nov 15 09:59:07 functional-169872 kubelet[4526]: E1115 09:59:07.686828    4526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-nrvb4" podUID="e3a39c2d-2b6d-4527-b9cf-e1568ddcf995"
	Nov 15 09:59:07 functional-169872 kubelet[4526]: E1115 09:59:07.686909    4526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5rfjv" podUID="63cde62e-14e4-46e5-9865-f48742429a74"
	Nov 15 09:59:18 functional-169872 kubelet[4526]: E1115 09:59:18.687732    4526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-nrvb4" podUID="e3a39c2d-2b6d-4527-b9cf-e1568ddcf995"
	Nov 15 09:59:21 functional-169872 kubelet[4526]: E1115 09:59:21.686943    4526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5rfjv" podUID="63cde62e-14e4-46e5-9865-f48742429a74"
	Nov 15 09:59:32 functional-169872 kubelet[4526]: E1115 09:59:32.687723    4526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-nrvb4" podUID="e3a39c2d-2b6d-4527-b9cf-e1568ddcf995"
	Nov 15 09:59:34 functional-169872 kubelet[4526]: E1115 09:59:34.687670    4526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5rfjv" podUID="63cde62e-14e4-46e5-9865-f48742429a74"
	Nov 15 09:59:43 functional-169872 kubelet[4526]: E1115 09:59:43.686926    4526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-nrvb4" podUID="e3a39c2d-2b6d-4527-b9cf-e1568ddcf995"
	Nov 15 09:59:45 functional-169872 kubelet[4526]: E1115 09:59:45.687520    4526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5rfjv" podUID="63cde62e-14e4-46e5-9865-f48742429a74"
	Nov 15 09:59:56 functional-169872 kubelet[4526]: E1115 09:59:56.687085    4526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-nrvb4" podUID="e3a39c2d-2b6d-4527-b9cf-e1568ddcf995"
	Nov 15 09:59:58 functional-169872 kubelet[4526]: E1115 09:59:58.687077    4526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5rfjv" podUID="63cde62e-14e4-46e5-9865-f48742429a74"
	Nov 15 10:00:07 functional-169872 kubelet[4526]: E1115 10:00:07.687137    4526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-nrvb4" podUID="e3a39c2d-2b6d-4527-b9cf-e1568ddcf995"
	Nov 15 10:00:09 functional-169872 kubelet[4526]: E1115 10:00:09.688392    4526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5rfjv" podUID="63cde62e-14e4-46e5-9865-f48742429a74"
	
	
	==> kubernetes-dashboard [e226d89a775c73ef713d1a231674e0ca8f8e0016bb3f3afd866f30b7784392e6] <==
	2025/11/15 09:50:56 Starting overwatch
	2025/11/15 09:50:56 Using namespace: kubernetes-dashboard
	2025/11/15 09:50:56 Using in-cluster config to connect to apiserver
	2025/11/15 09:50:56 Using secret token for csrf signing
	2025/11/15 09:50:56 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 09:50:56 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 09:50:56 Successful initial request to the apiserver, version: v1.34.1
	2025/11/15 09:50:56 Generating JWE encryption key
	2025/11/15 09:50:56 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 09:50:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 09:50:56 Initializing JWE encryption key from synchronized object
	2025/11/15 09:50:56 Creating in-cluster Sidecar client
	2025/11/15 09:50:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 09:50:56 Serving insecurely on HTTP port: 9090
	2025/11/15 09:51:26 Successful request to sidecar
	
	
	==> storage-provisioner [8661449ae43ae7d64b0f28ad9acf6eee8843f95b9683e29ac34363bbea76cb24] <==
	I1115 09:49:24.832271       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 09:49:24.839633       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 09:49:24.839670       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 09:49:24.841825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:49:28.296480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:49:32.556946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:49:36.155062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:49:39.209055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f53fd0e5f0badc239d039e5344fc722fec54b6c2f8db53b61cdf6bb0eafdeb69] <==
	W1115 09:59:53.824426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:59:55.827413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:59:55.831062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:59:57.834638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:59:57.838467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:59:59.841253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:59:59.846150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:01.848831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:01.852773       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:03.855878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:03.860217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:05.863794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:05.867548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:07.870813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:07.874598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:09.878106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:09.882991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:11.885968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:11.889697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:13.892715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:13.896420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:15.899657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:15.903403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:17.907484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:17.911381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-169872 -n functional-169872
helpers_test.go:269: (dbg) Run:  kubectl --context functional-169872 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-nrvb4 hello-node-connect-7d85dfc575-5rfjv
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-169872 describe pod busybox-mount hello-node-75c85bcc94-nrvb4 hello-node-connect-7d85dfc575-5rfjv
helpers_test.go:290: (dbg) kubectl --context functional-169872 describe pod busybox-mount hello-node-75c85bcc94-nrvb4 hello-node-connect-7d85dfc575-5rfjv:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-169872/192.168.49.2
	Start Time:       Sat, 15 Nov 2025 09:50:37 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://b8697bb012d2832adfa846b826f26c7f978cfff7f9542ddcd4acc39c82f753bd
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 15 Nov 2025 09:50:41 +0000
	      Finished:     Sat, 15 Nov 2025 09:50:41 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lcwnc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-lcwnc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m41s  default-scheduler  Successfully assigned default/busybox-mount to functional-169872
	  Normal  Pulling    9m41s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m37s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 4.048s (4.048s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m37s  kubelet            Created container: mount-munger
	  Normal  Started    9m37s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-nrvb4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-169872/192.168.49.2
	Start Time:       Sat, 15 Nov 2025 09:50:34 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5p5wq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-5p5wq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m44s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-nrvb4 to functional-169872
	  Normal   Pulling    6m51s (x5 over 9m44s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m51s (x5 over 9m44s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m51s (x5 over 9m44s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m39s (x20 over 9m44s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m25s (x21 over 9m44s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-5rfjv
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-169872/192.168.49.2
	Start Time:       Sat, 15 Nov 2025 09:50:15 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sn8cm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-sn8cm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-5rfjv to functional-169872
	  Normal   Pulling    6m52s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m52s (x5 over 9m50s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m52s (x5 over 9m50s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m49s (x20 over 9m50s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m35s (x21 over 9m50s)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.95s)
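The kubelet events above give the root cause: CRI-O's short-name enforcement rejects the unqualified reference "kicbase/echo-server" because it matches more than one configured registry, so no echo-server pod ever becomes ready and the connect test times out. A minimal sketch of the same deployment with a fully qualified image name (assuming the image is served from Docker Hub; this command is not part of the recorded run):

	kubectl --context functional-169872 create deployment hello-node-connect --image=docker.io/kicbase/echo-server:latest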

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-169872 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-169872 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-nrvb4" [e3a39c2d-2b6d-4527-b9cf-e1568ddcf995] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-169872 -n functional-169872
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-15 10:00:34.727612731 +0000 UTC m=+1215.079243915
functional_test.go:1460: (dbg) Run:  kubectl --context functional-169872 describe po hello-node-75c85bcc94-nrvb4 -n default
functional_test.go:1460: (dbg) kubectl --context functional-169872 describe po hello-node-75c85bcc94-nrvb4 -n default:
Name:             hello-node-75c85bcc94-nrvb4
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-169872/192.168.49.2
Start Time:       Sat, 15 Nov 2025 09:50:34 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5p5wq (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-5p5wq:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-nrvb4 to functional-169872
Normal   Pulling    7m7s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m7s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m7s (x5 over 10m)    kubelet            Error: ErrImagePull
Warning  Failed     4m55s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m41s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-169872 logs hello-node-75c85bcc94-nrvb4 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-169872 logs hello-node-75c85bcc94-nrvb4 -n default: exit status 1 (61.443798ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-nrvb4" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-169872 logs hello-node-75c85bcc94-nrvb4 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.60s)
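DeployApp fails for the same reason as ServiceCmdConnect: the pull never succeeds, so the 10m wait for app=hello-node expires. If the unqualified image name were to be kept, one node-side option is a containers-registries drop-in that relaxes enforcement or aliases the short name; the snippet below is an illustrative sketch only (path and contents are assumptions), and CRI-O would typically need a restart to pick it up.

	# /etc/containers/registries.conf.d/99-echo-server.conf  (illustrative path)
	short-name-mode = "permissive"

	[aliases]
	  "kicbase/echo-server" = "docker.io/kicbase/echo-server"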

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 image load --daemon kicbase/echo-server:functional-169872 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-169872 image load --daemon kicbase/echo-server:functional-169872 --alsologtostderr: (1.024933972s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-169872" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.25s)
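The assertion fails because the tag never appears in the node's image store after `image load --daemon`. A hedged verification sketch comparing the host Docker daemon with the node side (none of these commands are part of the recorded run):

	docker image ls kicbase/echo-server:functional-169872
	out/minikube-linux-amd64 -p functional-169872 image ls
	out/minikube-linux-amd64 -p functional-169872 ssh -- sudo crictl images | grep echo-server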

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 image load --daemon kicbase/echo-server:functional-169872 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-169872" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:250: (dbg) Done: docker pull kicbase/echo-server:latest: (1.142261235s)
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-169872
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 image load --daemon kicbase/echo-server:functional-169872 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-169872" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.05s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 image save kicbase/echo-server:functional-169872 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
2025/11/15 09:51:00 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1115 09:51:00.094984   98596 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:51:00.095164   98596 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:51:00.095175   98596 out.go:374] Setting ErrFile to fd 2...
	I1115 09:51:00.095179   98596 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:51:00.095390   98596 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 09:51:00.095928   98596 config.go:182] Loaded profile config "functional-169872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:51:00.096035   98596 config.go:182] Loaded profile config "functional-169872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:51:00.096409   98596 cli_runner.go:164] Run: docker container inspect functional-169872 --format={{.State.Status}}
	I1115 09:51:00.115160   98596 ssh_runner.go:195] Run: systemctl --version
	I1115 09:51:00.115223   98596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-169872
	I1115 09:51:00.134079   98596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/functional-169872/id_rsa Username:docker}
	I1115 09:51:00.229512   98596 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1115 09:51:00.229609   98596 cache_images.go:255] Failed to load cached images for "functional-169872": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1115 09:51:00.229640   98596 cache_images.go:267] failed pushing to: functional-169872

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)
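This failure is downstream of ImageSaveToFile: the tarball at the workspace path was never written, so the load has nothing to read. A hedged round-trip sketch with an explicit existence check (the /tmp path is illustrative, not the path the test uses):

	out/minikube-linux-amd64 -p functional-169872 image save kicbase/echo-server:functional-169872 /tmp/echo-server.tar
	test -s /tmp/echo-server.tar && \
	  out/minikube-linux-amd64 -p functional-169872 image load /tmp/echo-server.tar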

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-169872
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 image save --daemon kicbase/echo-server:functional-169872 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-169872
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-169872: exit status 1 (17.804807ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-169872

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-169872

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.47s)
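The test only inspects the localhost/-prefixed tag. A hedged check of both the prefixed and unprefixed names would show whether `image save --daemon` wrote anything to the host daemon at all (not part of the recorded run):

	docker image inspect --format '{{.Id}}' localhost/kicbase/echo-server:functional-169872 \
	  || docker image inspect --format '{{.Id}}' kicbase/echo-server:functional-169872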

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169872 service --namespace=default --https --url hello-node: exit status 115 (551.692622ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31848
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-169872 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)
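SVC_UNREACHABLE here is a consequence of the pull failures above: the NodePort URL is printed, but the hello-node service has no ready pod behind it. A hedged way to confirm that from the same kubectl context:

	kubectl --context functional-169872 get svc hello-node -o wide
	kubectl --context functional-169872 get endpoints hello-node
	kubectl --context functional-169872 get pods -l app=hello-node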

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169872 service hello-node --url --format={{.IP}}: exit status 115 (544.983246ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-169872 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169872 service hello-node --url: exit status 115 (541.354438ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31848
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-169872 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31848
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.54s)
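The URL itself is well formed (192.168.49.2:31848 is the NodePort), so a direct probe would be expected to fail or hang while the service has no endpoints; the timeout below is only a guard for that expected failure (sketch, not part of the recorded run):

	curl --max-time 5 -v http://192.168.49.2:31848/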

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.14s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-387139 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-387139 --output=json --user=testUser: exit status 80 (2.135222847s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"429bc97f-4d7a-4b15-aed4-4536f1b12868","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-387139 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"7cd4e20d-9f38-4827-bcee-1c1180dbd982","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-15T10:12:08Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"40d2a0dc-a455-4a26-9fc3-0d4f2ea58eb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-387139 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.14s)
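The GUEST_PAUSE error is at the runtime level rather than the Kubernetes level: `sudo runc list -f json` inside the node fails because /run/runc does not exist. A hedged sketch that reproduces the same view directly on the node:

	out/minikube-linux-amd64 -p json-output-387139 ssh -- ls -ld /run/runc
	out/minikube-linux-amd64 -p json-output-387139 ssh -- sudo runc list -f json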

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.44s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-387139 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-387139 --output=json --user=testUser: exit status 80 (1.434629997s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"86dfaa60-a8d7-4968-8441-54fd6524d800","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-387139 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"b7a64aeb-1307-4e94-8fad-9110ed142e49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-15T10:12:09Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"3f9ccc55-da77-4e5e-8ef4-1e6abf9b1fd6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-387139 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.44s)
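Unpause fails with the same missing /run/runc as pause. Since each stdout line above is a CloudEvents JSON object, the failure detail can also be extracted mechanically; a hedged parsing sketch (jq on the test runner is an assumption):

	out/minikube-linux-amd64 unpause -p json-output-387139 --output=json --user=testUser \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'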

                                                
                                    
x
+
TestPause/serial/Pause (7.38s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-642487 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-642487 --alsologtostderr -v=5: exit status 80 (2.537366999s)

                                                
                                                
-- stdout --
	* Pausing node pause-642487 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:27:56.867858  257730 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:27:56.868157  257730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:27:56.868168  257730 out.go:374] Setting ErrFile to fd 2...
	I1115 10:27:56.868174  257730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:27:56.868373  257730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 10:27:56.868601  257730 out.go:368] Setting JSON to false
	I1115 10:27:56.868656  257730 mustload.go:66] Loading cluster: pause-642487
	I1115 10:27:56.869037  257730 config.go:182] Loaded profile config "pause-642487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:27:56.869449  257730 cli_runner.go:164] Run: docker container inspect pause-642487 --format={{.State.Status}}
	I1115 10:27:56.890397  257730 host.go:66] Checking if "pause-642487" exists ...
	I1115 10:27:56.890741  257730 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:27:56.956256  257730 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:57 OomKillDisable:true NGoroutines:85 SystemTime:2025-11-15 10:27:56.946674437 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:27:56.956994  257730 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-642487 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1115 10:27:56.959438  257730 out.go:179] * Pausing node pause-642487 ... 
	I1115 10:27:56.960467  257730 host.go:66] Checking if "pause-642487" exists ...
	I1115 10:27:56.960698  257730 ssh_runner.go:195] Run: systemctl --version
	I1115 10:27:56.960735  257730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-642487
	I1115 10:27:56.977191  257730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32974 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/pause-642487/id_rsa Username:docker}
	I1115 10:27:57.078798  257730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:27:57.092281  257730 pause.go:52] kubelet running: true
	I1115 10:27:57.092335  257730 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:27:57.238249  257730 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:27:57.238349  257730 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:27:57.313943  257730 cri.go:89] found id: "a5225c7077c6f279cc61d6fc15cdb423b982b29412dd8a173b4bdc1c976af30b"
	I1115 10:27:57.313985  257730 cri.go:89] found id: "79929c6a2bd5c66ee1f199a1d7d3bd6158f0914fb6ede8beb6a05b4c765efed2"
	I1115 10:27:57.313990  257730 cri.go:89] found id: "3d25753d2c98879b7a05bcb03f07c48e41dbace9c81399d0d21f0952ab2ca92b"
	I1115 10:27:57.313995  257730 cri.go:89] found id: "602dacd6f75382f88ebd4b70e2712c63d504c2104f09e69e7ed06953540d3bd6"
	I1115 10:27:57.314000  257730 cri.go:89] found id: "0ef58fd5a90ee7a6981b544c76026e5556c7883732036cbb3edd93733ccf4a6c"
	I1115 10:27:57.314005  257730 cri.go:89] found id: "ad476564fba4d88d6b14f082586a864bcf39dadd4f5371f877dda5662a573cf6"
	I1115 10:27:57.314010  257730 cri.go:89] found id: "fb8520106a10f5510cb789cf55fa41e2dcef15b4e49b1e8e695b835e5d20c27f"
	I1115 10:27:57.314014  257730 cri.go:89] found id: "9cc96cc9a3498c97b9e9f90a49f184e0c4c09a5789e8c22374b75eefad9909ec"
	I1115 10:27:57.314018  257730 cri.go:89] found id: "8bc8fa4817aeb775ecf14d85b0146af1da01b6d0d4276cb3d4f7730073d8be92"
	I1115 10:27:57.314024  257730 cri.go:89] found id: "7816357c42fe9c2750e08cc8de3eaa7ada90dbdcaa66ef57679625b7d186d0b6"
	I1115 10:27:57.314027  257730 cri.go:89] found id: "f6ded8d1e4b3632bdec1a6e539a1f29b3c4950da7cb62c97ca951715e1ec6888"
	I1115 10:27:57.314029  257730 cri.go:89] found id: "3bc6859c7b0047c536c2b2ef1f446ac2fc769d38f898f17f61d309bf77faba98"
	I1115 10:27:57.314033  257730 cri.go:89] found id: "212c7652d9dd7a0609b346e2da303aaeeb5bbe32e002bba5a7822c363e37bab1"
	I1115 10:27:57.314037  257730 cri.go:89] found id: "4e966f58ab607d5bbe97198f865699126b593ea8fce93b6ac7fa51dbf052e144"
	I1115 10:27:57.314041  257730 cri.go:89] found id: ""
	I1115 10:27:57.314088  257730 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:27:57.325732  257730 retry.go:31] will retry after 175.002286ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:27:57Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:27:57.501219  257730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:27:57.517653  257730 pause.go:52] kubelet running: false
	I1115 10:27:57.517712  257730 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:27:57.632870  257730 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:27:57.632948  257730 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:27:57.702869  257730 cri.go:89] found id: "a5225c7077c6f279cc61d6fc15cdb423b982b29412dd8a173b4bdc1c976af30b"
	I1115 10:27:57.702896  257730 cri.go:89] found id: "79929c6a2bd5c66ee1f199a1d7d3bd6158f0914fb6ede8beb6a05b4c765efed2"
	I1115 10:27:57.702902  257730 cri.go:89] found id: "3d25753d2c98879b7a05bcb03f07c48e41dbace9c81399d0d21f0952ab2ca92b"
	I1115 10:27:57.702908  257730 cri.go:89] found id: "602dacd6f75382f88ebd4b70e2712c63d504c2104f09e69e7ed06953540d3bd6"
	I1115 10:27:57.702912  257730 cri.go:89] found id: "0ef58fd5a90ee7a6981b544c76026e5556c7883732036cbb3edd93733ccf4a6c"
	I1115 10:27:57.702917  257730 cri.go:89] found id: "ad476564fba4d88d6b14f082586a864bcf39dadd4f5371f877dda5662a573cf6"
	I1115 10:27:57.702921  257730 cri.go:89] found id: "fb8520106a10f5510cb789cf55fa41e2dcef15b4e49b1e8e695b835e5d20c27f"
	I1115 10:27:57.702926  257730 cri.go:89] found id: "9cc96cc9a3498c97b9e9f90a49f184e0c4c09a5789e8c22374b75eefad9909ec"
	I1115 10:27:57.702930  257730 cri.go:89] found id: "8bc8fa4817aeb775ecf14d85b0146af1da01b6d0d4276cb3d4f7730073d8be92"
	I1115 10:27:57.702940  257730 cri.go:89] found id: "7816357c42fe9c2750e08cc8de3eaa7ada90dbdcaa66ef57679625b7d186d0b6"
	I1115 10:27:57.702945  257730 cri.go:89] found id: "f6ded8d1e4b3632bdec1a6e539a1f29b3c4950da7cb62c97ca951715e1ec6888"
	I1115 10:27:57.702949  257730 cri.go:89] found id: "3bc6859c7b0047c536c2b2ef1f446ac2fc769d38f898f17f61d309bf77faba98"
	I1115 10:27:57.702968  257730 cri.go:89] found id: "212c7652d9dd7a0609b346e2da303aaeeb5bbe32e002bba5a7822c363e37bab1"
	I1115 10:27:57.702972  257730 cri.go:89] found id: "4e966f58ab607d5bbe97198f865699126b593ea8fce93b6ac7fa51dbf052e144"
	I1115 10:27:57.702993  257730 cri.go:89] found id: ""
	I1115 10:27:57.703045  257730 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:27:57.716116  257730 retry.go:31] will retry after 249.986032ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:27:57Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:27:57.966645  257730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:27:57.982987  257730 pause.go:52] kubelet running: false
	I1115 10:27:57.983046  257730 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:27:58.125092  257730 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:27:58.125203  257730 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:27:58.206130  257730 cri.go:89] found id: "a5225c7077c6f279cc61d6fc15cdb423b982b29412dd8a173b4bdc1c976af30b"
	I1115 10:27:58.206162  257730 cri.go:89] found id: "79929c6a2bd5c66ee1f199a1d7d3bd6158f0914fb6ede8beb6a05b4c765efed2"
	I1115 10:27:58.206169  257730 cri.go:89] found id: "3d25753d2c98879b7a05bcb03f07c48e41dbace9c81399d0d21f0952ab2ca92b"
	I1115 10:27:58.206175  257730 cri.go:89] found id: "602dacd6f75382f88ebd4b70e2712c63d504c2104f09e69e7ed06953540d3bd6"
	I1115 10:27:58.206193  257730 cri.go:89] found id: "0ef58fd5a90ee7a6981b544c76026e5556c7883732036cbb3edd93733ccf4a6c"
	I1115 10:27:58.206199  257730 cri.go:89] found id: "ad476564fba4d88d6b14f082586a864bcf39dadd4f5371f877dda5662a573cf6"
	I1115 10:27:58.206204  257730 cri.go:89] found id: "fb8520106a10f5510cb789cf55fa41e2dcef15b4e49b1e8e695b835e5d20c27f"
	I1115 10:27:58.206209  257730 cri.go:89] found id: "9cc96cc9a3498c97b9e9f90a49f184e0c4c09a5789e8c22374b75eefad9909ec"
	I1115 10:27:58.206213  257730 cri.go:89] found id: "8bc8fa4817aeb775ecf14d85b0146af1da01b6d0d4276cb3d4f7730073d8be92"
	I1115 10:27:58.206227  257730 cri.go:89] found id: "7816357c42fe9c2750e08cc8de3eaa7ada90dbdcaa66ef57679625b7d186d0b6"
	I1115 10:27:58.206237  257730 cri.go:89] found id: "f6ded8d1e4b3632bdec1a6e539a1f29b3c4950da7cb62c97ca951715e1ec6888"
	I1115 10:27:58.206241  257730 cri.go:89] found id: "3bc6859c7b0047c536c2b2ef1f446ac2fc769d38f898f17f61d309bf77faba98"
	I1115 10:27:58.206245  257730 cri.go:89] found id: "212c7652d9dd7a0609b346e2da303aaeeb5bbe32e002bba5a7822c363e37bab1"
	I1115 10:27:58.206249  257730 cri.go:89] found id: "4e966f58ab607d5bbe97198f865699126b593ea8fce93b6ac7fa51dbf052e144"
	I1115 10:27:58.206254  257730 cri.go:89] found id: ""
	I1115 10:27:58.206296  257730 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:27:58.220627  257730 retry.go:31] will retry after 832.845073ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:27:58Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:27:59.054165  257730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:27:59.071863  257730 pause.go:52] kubelet running: false
	I1115 10:27:59.071970  257730 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:27:59.223826  257730 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:27:59.223920  257730 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:27:59.309032  257730 cri.go:89] found id: "a5225c7077c6f279cc61d6fc15cdb423b982b29412dd8a173b4bdc1c976af30b"
	I1115 10:27:59.309072  257730 cri.go:89] found id: "79929c6a2bd5c66ee1f199a1d7d3bd6158f0914fb6ede8beb6a05b4c765efed2"
	I1115 10:27:59.309080  257730 cri.go:89] found id: "3d25753d2c98879b7a05bcb03f07c48e41dbace9c81399d0d21f0952ab2ca92b"
	I1115 10:27:59.309085  257730 cri.go:89] found id: "602dacd6f75382f88ebd4b70e2712c63d504c2104f09e69e7ed06953540d3bd6"
	I1115 10:27:59.309089  257730 cri.go:89] found id: "0ef58fd5a90ee7a6981b544c76026e5556c7883732036cbb3edd93733ccf4a6c"
	I1115 10:27:59.309094  257730 cri.go:89] found id: "ad476564fba4d88d6b14f082586a864bcf39dadd4f5371f877dda5662a573cf6"
	I1115 10:27:59.309098  257730 cri.go:89] found id: "fb8520106a10f5510cb789cf55fa41e2dcef15b4e49b1e8e695b835e5d20c27f"
	I1115 10:27:59.309101  257730 cri.go:89] found id: "9cc96cc9a3498c97b9e9f90a49f184e0c4c09a5789e8c22374b75eefad9909ec"
	I1115 10:27:59.309105  257730 cri.go:89] found id: "8bc8fa4817aeb775ecf14d85b0146af1da01b6d0d4276cb3d4f7730073d8be92"
	I1115 10:27:59.309145  257730 cri.go:89] found id: "7816357c42fe9c2750e08cc8de3eaa7ada90dbdcaa66ef57679625b7d186d0b6"
	I1115 10:27:59.309154  257730 cri.go:89] found id: "f6ded8d1e4b3632bdec1a6e539a1f29b3c4950da7cb62c97ca951715e1ec6888"
	I1115 10:27:59.309158  257730 cri.go:89] found id: "3bc6859c7b0047c536c2b2ef1f446ac2fc769d38f898f17f61d309bf77faba98"
	I1115 10:27:59.309162  257730 cri.go:89] found id: "212c7652d9dd7a0609b346e2da303aaeeb5bbe32e002bba5a7822c363e37bab1"
	I1115 10:27:59.309166  257730 cri.go:89] found id: "4e966f58ab607d5bbe97198f865699126b593ea8fce93b6ac7fa51dbf052e144"
	I1115 10:27:59.309170  257730 cri.go:89] found id: ""
	I1115 10:27:59.309220  257730 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:27:59.325674  257730 out.go:203] 
	W1115 10:27:59.327043  257730 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:27:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:27:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:27:59.327064  257730 out.go:285] * 
	* 
	W1115 10:27:59.333301  257730 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:27:59.334463  257730 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-642487 --alsologtostderr -v=5" : exit status 80
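The pause itself failed because every attempt to enumerate running containers with `sudo runc list -f json` hit "open /run/runc: no such file or directory": minikube lists containers through runc before freezing them, and on this node that state directory does not exist even though crictl still sees all of the kube-system containers. A minimal diagnostic sketch for reproducing this by hand, assuming the pause-642487 node is still up (the /run/crun path and the grep pattern below are illustrative assumptions, not values taken from this log):

    # same container listing the pause code issued, via crictl (this worked in the log above)
    out/minikube-linux-amd64 ssh -p pause-642487 "sudo crictl ps --quiet --label io.kubernetes.pod.namespace=kube-system"

    # the listing that failed: runc's default state root is /run/runc
    out/minikube-linux-amd64 ssh -p pause-642487 "sudo runc list -f json"

    # check whether the runtime state lives somewhere else (/run/crun is an assumed alternative)
    out/minikube-linux-amd64 ssh -p pause-642487 "sudo ls -ld /run/runc /run/crun 2>/dev/null; sudo crio config | grep -n 'runtime_path\|\[crio.runtime' | head"

If cri-o keeps its runtime state under a different root (or is driving a runtime other than runc), there is nothing for runc to list under /run/runc, which is consistent with the GUEST_PAUSE error above.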
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-642487
helpers_test.go:243: (dbg) docker inspect pause-642487:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "edc8640ba52ff151e06a6fb32b36e92157bdd4731a041218ada6f1b739d56fba",
	        "Created": "2025-11-15T10:26:21.921151466Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 236220,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:26:21.969161358Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/edc8640ba52ff151e06a6fb32b36e92157bdd4731a041218ada6f1b739d56fba/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/edc8640ba52ff151e06a6fb32b36e92157bdd4731a041218ada6f1b739d56fba/hostname",
	        "HostsPath": "/var/lib/docker/containers/edc8640ba52ff151e06a6fb32b36e92157bdd4731a041218ada6f1b739d56fba/hosts",
	        "LogPath": "/var/lib/docker/containers/edc8640ba52ff151e06a6fb32b36e92157bdd4731a041218ada6f1b739d56fba/edc8640ba52ff151e06a6fb32b36e92157bdd4731a041218ada6f1b739d56fba-json.log",
	        "Name": "/pause-642487",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-642487:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-642487",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "edc8640ba52ff151e06a6fb32b36e92157bdd4731a041218ada6f1b739d56fba",
	                "LowerDir": "/var/lib/docker/overlay2/9859fbb95620510513364cc8ff34c8aecf3a8a86ed019d2be4d7c2fb2b4c14ce-init/diff:/var/lib/docker/overlay2/507c85b6fb43d9c98216fac79d7a8d08dd20b2a63d0fbfc758336e5d38c04044/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9859fbb95620510513364cc8ff34c8aecf3a8a86ed019d2be4d7c2fb2b4c14ce/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9859fbb95620510513364cc8ff34c8aecf3a8a86ed019d2be4d7c2fb2b4c14ce/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9859fbb95620510513364cc8ff34c8aecf3a8a86ed019d2be4d7c2fb2b4c14ce/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-642487",
	                "Source": "/var/lib/docker/volumes/pause-642487/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-642487",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-642487",
	                "name.minikube.sigs.k8s.io": "pause-642487",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c8237d2134376614d24fe570567f8c9bd2890ed695cad23e9198bfb5365f2fae",
	            "SandboxKey": "/var/run/docker/netns/c8237d213437",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32974"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32975"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32978"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32976"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32977"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-642487": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "45fa74c79adc443d7e7679f83553aa89d1028f6c56e4ba6acaf65b07e5eda1b8",
	                    "EndpointID": "70ea5df1a669f8f39338209acc6b047769e5478c78e5242adb2e8eb5d47b718e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "8a:1c:92:f8:8e:b8",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-642487",
	                        "edc8640ba52f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
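The inspect output above confirms the container is still running, with SSH published on 127.0.0.1:32974, the same port the pause attempt connected to. When only a couple of these fields are needed, the `docker inspect -f` Go-template form already used earlier in this log (cli_runner at 10:27:56.960735) keeps a post-mortem short; a sketch:

    # container state and the published SSH port, pulled straight from the inspect data
    docker container inspect -f '{{.State.Status}}' pause-642487
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-642487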
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-642487 -n pause-642487
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-642487 -n pause-642487: exit status 2 (364.777809ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
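The `--format={{.Host}}` query only reports the host state, so it prints Running even though the command exits with status 2; the log above shows the pause attempt had already run `systemctl disable --now kubelet` before failing, leaving the node half-paused. A quick way to see which component the non-zero code refers to is the unfiltered status (a sketch, not part of the test harness):

    # full per-component status instead of just {{.Host}}
    out/minikube-linux-amd64 status -p pause-642487; echo "status exit code: $?"

Re-running `out/minikube-linux-amd64 unpause -p pause-642487` (or `start`) would be expected to bring the kubelet back up, but that is outside what this test exercises.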
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-642487 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-642487 logs -n 25: (1.443102386s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-931243 sudo systemctl cat cri-docker --no-pager                                                                                │ cilium-931243             │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │                     │
	│ ssh     │ -p cilium-931243 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                           │ cilium-931243             │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │                     │
	│ ssh     │ -p cilium-931243 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                     │ cilium-931243             │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │                     │
	│ ssh     │ -p cilium-931243 sudo cri-dockerd --version                                                                                              │ cilium-931243             │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │                     │
	│ ssh     │ -p cilium-931243 sudo systemctl status containerd --all --full --no-pager                                                                │ cilium-931243             │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │                     │
	│ ssh     │ -p cilium-931243 sudo systemctl cat containerd --no-pager                                                                                │ cilium-931243             │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │                     │
	│ ssh     │ -p cilium-931243 sudo cat /lib/systemd/system/containerd.service                                                                         │ cilium-931243             │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │                     │
	│ ssh     │ -p cilium-931243 sudo cat /etc/containerd/config.toml                                                                                    │ cilium-931243             │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │                     │
	│ ssh     │ -p cilium-931243 sudo containerd config dump                                                                                             │ cilium-931243             │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │                     │
	│ ssh     │ -p cilium-931243 sudo systemctl status crio --all --full --no-pager                                                                      │ cilium-931243             │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │                     │
	│ ssh     │ -p cilium-931243 sudo systemctl cat crio --no-pager                                                                                      │ cilium-931243             │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │                     │
	│ ssh     │ -p cilium-931243 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                            │ cilium-931243             │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │                     │
	│ ssh     │ -p cilium-931243 sudo crio config                                                                                                        │ cilium-931243             │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │                     │
	│ delete  │ -p cilium-931243                                                                                                                         │ cilium-931243             │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │ 15 Nov 25 10:26 UTC │
	│ start   │ -p stopped-upgrade-567029 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-567029    │ jenkins │ v1.32.0 │ 15 Nov 25 10:27 UTC │                     │
	│ ssh     │ -p NoKubernetes-855068 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-855068       │ jenkins │ v1.37.0 │ 15 Nov 25 10:27 UTC │                     │
	│ stop    │ -p NoKubernetes-855068                                                                                                                   │ NoKubernetes-855068       │ jenkins │ v1.37.0 │ 15 Nov 25 10:27 UTC │ 15 Nov 25 10:27 UTC │
	│ start   │ -p NoKubernetes-855068 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-855068       │ jenkins │ v1.37.0 │ 15 Nov 25 10:27 UTC │ 15 Nov 25 10:27 UTC │
	│ ssh     │ -p NoKubernetes-855068 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-855068       │ jenkins │ v1.37.0 │ 15 Nov 25 10:27 UTC │                     │
	│ delete  │ -p NoKubernetes-855068                                                                                                                   │ NoKubernetes-855068       │ jenkins │ v1.37.0 │ 15 Nov 25 10:27 UTC │ 15 Nov 25 10:27 UTC │
	│ start   │ -p missing-upgrade-229925 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-229925    │ jenkins │ v1.32.0 │ 15 Nov 25 10:27 UTC │                     │
	│ start   │ -p pause-642487 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-642487              │ jenkins │ v1.37.0 │ 15 Nov 25 10:27 UTC │ 15 Nov 25 10:27 UTC │
	│ delete  │ -p offline-crio-637291                                                                                                                   │ offline-crio-637291       │ jenkins │ v1.37.0 │ 15 Nov 25 10:27 UTC │ 15 Nov 25 10:27 UTC │
	│ start   │ -p kubernetes-upgrade-914881 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-914881 │ jenkins │ v1.37.0 │ 15 Nov 25 10:27 UTC │                     │
	│ pause   │ -p pause-642487 --alsologtostderr -v=5                                                                                                   │ pause-642487              │ jenkins │ v1.37.0 │ 15 Nov 25 10:27 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:27:38
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:27:38.134995  254667 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:27:38.135262  254667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:27:38.135271  254667 out.go:374] Setting ErrFile to fd 2...
	I1115 10:27:38.135275  254667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:27:38.135473  254667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 10:27:38.135972  254667 out.go:368] Setting JSON to false
	I1115 10:27:38.136988  254667 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7795,"bootTime":1763194663,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:27:38.137085  254667 start.go:143] virtualization: kvm guest
	I1115 10:27:38.138976  254667 out.go:179] * [kubernetes-upgrade-914881] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:27:38.140169  254667 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 10:27:38.140182  254667 notify.go:221] Checking for updates...
	I1115 10:27:38.142728  254667 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:27:38.144088  254667 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:27:38.145205  254667 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube
	I1115 10:27:38.146421  254667 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:27:38.147382  254667 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:27:38.148825  254667 config.go:182] Loaded profile config "missing-upgrade-229925": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1115 10:27:38.148942  254667 config.go:182] Loaded profile config "pause-642487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:27:38.149046  254667 config.go:182] Loaded profile config "stopped-upgrade-567029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1115 10:27:38.149160  254667 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:27:38.189202  254667 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:27:38.189373  254667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:27:38.245155  254667 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:56 SystemTime:2025-11-15 10:27:38.235518498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:27:38.245269  254667 docker.go:319] overlay module found
	I1115 10:27:38.246993  254667 out.go:179] * Using the docker driver based on user configuration
	I1115 10:27:38.248229  254667 start.go:309] selected driver: docker
	I1115 10:27:38.248245  254667 start.go:930] validating driver "docker" against <nil>
	I1115 10:27:38.248257  254667 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:27:38.249153  254667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:27:38.323500  254667 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:56 SystemTime:2025-11-15 10:27:38.314284648 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:27:38.323676  254667 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 10:27:38.323875  254667 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1115 10:27:38.325769  254667 out.go:179] * Using Docker driver with root privileges
	I1115 10:27:38.326798  254667 cni.go:84] Creating CNI manager for ""
	I1115 10:27:38.326861  254667 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:27:38.326872  254667 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 10:27:38.326935  254667 start.go:353] cluster config:
	{Name:kubernetes-upgrade-914881 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-914881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:27:38.328195  254667 out.go:179] * Starting "kubernetes-upgrade-914881" primary control-plane node in "kubernetes-upgrade-914881" cluster
	I1115 10:27:38.329182  254667 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:27:38.330133  254667 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:27:38.331029  254667 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 10:27:38.331057  254667 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1115 10:27:38.331073  254667 cache.go:65] Caching tarball of preloaded images
	I1115 10:27:38.331143  254667 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:27:38.331160  254667 preload.go:238] Found /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 10:27:38.331178  254667 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1115 10:27:38.331286  254667 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/config.json ...
	I1115 10:27:38.331302  254667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/config.json: {Name:mkc062520ed9eead4ff3381037c44d504ca62a35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:27:38.350240  254667 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:27:38.350261  254667 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:27:38.350288  254667 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:27:38.350335  254667 start.go:360] acquireMachinesLock for kubernetes-upgrade-914881: {Name:mkc7cac26c6de5f12a63525aff7e026bda3aca7a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:27:38.350444  254667 start.go:364] duration metric: took 88.286µs to acquireMachinesLock for "kubernetes-upgrade-914881"
	I1115 10:27:38.350480  254667 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-914881 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-914881 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:27:38.350556  254667 start.go:125] createHost starting for "" (driver="docker")
	I1115 10:27:36.093334  252977 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.42 as a tarball
	I1115 10:27:36.093350  252977 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.42 from local cache
	I1115 10:27:37.575842  252977 cache.go:168] failed to download gcr.io/k8s-minikube/kicbase:v0.0.42, will try fallback image if available: error loading image: Error response from daemon: client version 1.43 is too old. Minimum supported API version is 1.44, please upgrade your client to a newer version
	I1115 10:27:37.575853  252977 image.go:79] Checking for docker.io/kicbase/stable:v0.0.42 in local docker daemon
	I1115 10:27:37.596645  252977 cache.go:149] Downloading docker.io/kicbase/stable:v0.0.42 to local cache
	I1115 10:27:37.596851  252977 image.go:63] Checking for docker.io/kicbase/stable:v0.0.42 in local cache directory
	I1115 10:27:37.596881  252977 image.go:118] Writing docker.io/kicbase/stable:v0.0.42 to local cache
	I1115 10:27:37.554782  253283 cli_runner.go:164] Run: docker network inspect pause-642487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:27:37.572781  253283 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1115 10:27:37.577582  253283 kubeadm.go:884] updating cluster {Name:pause-642487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-642487 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:27:37.577759  253283 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:27:37.577813  253283 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:27:37.614088  253283 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:27:37.614111  253283 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:27:37.614166  253283 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:27:37.641539  253283 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:27:37.641574  253283 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:27:37.641585  253283 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1115 10:27:37.641715  253283 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-642487 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-642487 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:27:37.641858  253283 ssh_runner.go:195] Run: crio config
	I1115 10:27:37.690462  253283 cni.go:84] Creating CNI manager for ""
	I1115 10:27:37.690484  253283 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:27:37.690502  253283 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:27:37.690523  253283 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-642487 NodeName:pause-642487 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:27:37.690635  253283 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-642487"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:27:37.690694  253283 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:27:37.698812  253283 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:27:37.698886  253283 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:27:37.706583  253283 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1115 10:27:37.719419  253283 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:27:37.731798  253283 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1115 10:27:37.744098  253283 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:27:37.747771  253283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:27:37.844646  253283 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:27:37.859000  253283 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487 for IP: 192.168.76.2
	I1115 10:27:37.859023  253283 certs.go:195] generating shared ca certs ...
	I1115 10:27:37.859039  253283 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:27:37.859243  253283 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 10:27:37.859291  253283 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 10:27:37.859301  253283 certs.go:257] generating profile certs ...
	I1115 10:27:37.859379  253283 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487/client.key
	I1115 10:27:37.859433  253283 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487/apiserver.key.164c5544
	I1115 10:27:37.859466  253283 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487/proxy-client.key
	I1115 10:27:37.859559  253283 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:27:37.859587  253283 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:27:37.859596  253283 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:27:37.859625  253283 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:27:37.859646  253283 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:27:37.859667  253283 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:27:37.859703  253283 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:27:37.860410  253283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:27:37.902780  253283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:27:37.921342  253283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:27:37.938781  253283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:27:37.955935  253283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1115 10:27:37.972651  253283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:27:37.990607  253283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:27:38.011467  253283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:27:38.062701  253283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:27:38.157877  253283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:27:38.275682  253283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:27:38.375283  253283 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:27:38.462546  253283 ssh_runner.go:195] Run: openssl version
	I1115 10:27:38.471708  253283 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:27:38.483846  253283 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:27:38.489014  253283 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:27:38.489126  253283 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:27:38.587995  253283 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:27:38.661038  253283 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:27:38.675786  253283 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:27:38.680146  253283 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:27:38.680199  253283 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:27:38.782245  253283 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:27:38.793941  253283 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:27:38.864858  253283 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:27:38.869050  253283 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:27:38.869111  253283 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:27:38.975335  253283 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:27:38.985622  253283 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:27:39.057166  253283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:27:39.187706  253283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:27:39.293663  253283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:27:39.466575  253283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:27:39.568779  253283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:27:39.680429  253283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1115 10:27:39.785676  253283 kubeadm.go:401] StartCluster: {Name:pause-642487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-642487 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:27:39.786136  253283 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:27:39.786233  253283 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:27:39.881968  253283 cri.go:89] found id: "a5225c7077c6f279cc61d6fc15cdb423b982b29412dd8a173b4bdc1c976af30b"
	I1115 10:27:39.882055  253283 cri.go:89] found id: "79929c6a2bd5c66ee1f199a1d7d3bd6158f0914fb6ede8beb6a05b4c765efed2"
	I1115 10:27:39.882062  253283 cri.go:89] found id: "3d25753d2c98879b7a05bcb03f07c48e41dbace9c81399d0d21f0952ab2ca92b"
	I1115 10:27:39.882067  253283 cri.go:89] found id: "602dacd6f75382f88ebd4b70e2712c63d504c2104f09e69e7ed06953540d3bd6"
	I1115 10:27:39.882071  253283 cri.go:89] found id: "0ef58fd5a90ee7a6981b544c76026e5556c7883732036cbb3edd93733ccf4a6c"
	I1115 10:27:39.882076  253283 cri.go:89] found id: "ad476564fba4d88d6b14f082586a864bcf39dadd4f5371f877dda5662a573cf6"
	I1115 10:27:39.882080  253283 cri.go:89] found id: "fb8520106a10f5510cb789cf55fa41e2dcef15b4e49b1e8e695b835e5d20c27f"
	I1115 10:27:39.882084  253283 cri.go:89] found id: "9cc96cc9a3498c97b9e9f90a49f184e0c4c09a5789e8c22374b75eefad9909ec"
	I1115 10:27:39.882088  253283 cri.go:89] found id: "8bc8fa4817aeb775ecf14d85b0146af1da01b6d0d4276cb3d4f7730073d8be92"
	I1115 10:27:39.882098  253283 cri.go:89] found id: "7816357c42fe9c2750e08cc8de3eaa7ada90dbdcaa66ef57679625b7d186d0b6"
	I1115 10:27:39.882102  253283 cri.go:89] found id: "f6ded8d1e4b3632bdec1a6e539a1f29b3c4950da7cb62c97ca951715e1ec6888"
	I1115 10:27:39.882106  253283 cri.go:89] found id: "3bc6859c7b0047c536c2b2ef1f446ac2fc769d38f898f17f61d309bf77faba98"
	I1115 10:27:39.882110  253283 cri.go:89] found id: "212c7652d9dd7a0609b346e2da303aaeeb5bbe32e002bba5a7822c363e37bab1"
	I1115 10:27:39.882132  253283 cri.go:89] found id: "4e966f58ab607d5bbe97198f865699126b593ea8fce93b6ac7fa51dbf052e144"
	I1115 10:27:39.882136  253283 cri.go:89] found id: ""
	I1115 10:27:39.882183  253283 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:27:39.900176  253283 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:27:39Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:27:39.900259  253283 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:27:39.964647  253283 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:27:39.964670  253283 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:27:39.964808  253283 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:27:39.975584  253283 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:27:39.976348  253283 kubeconfig.go:125] found "pause-642487" server: "https://192.168.76.2:8443"
	I1115 10:27:39.977118  253283 kapi.go:59] client config for pause-642487: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487/client.key", CAFile:"/home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]stri
ng(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 10:27:39.977645  253283 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1115 10:27:39.977660  253283 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1115 10:27:39.977667  253283 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1115 10:27:39.977673  253283 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1115 10:27:39.977682  253283 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1115 10:27:39.978236  253283 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:27:39.991964  253283 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1115 10:27:39.991999  253283 kubeadm.go:602] duration metric: took 27.323097ms to restartPrimaryControlPlane
	I1115 10:27:39.992009  253283 kubeadm.go:403] duration metric: took 206.34691ms to StartCluster
	I1115 10:27:39.992028  253283 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:27:39.992086  253283 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:27:39.992864  253283 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:27:39.993140  253283 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:27:39.993414  253283 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:27:39.993634  253283 config.go:182] Loaded profile config "pause-642487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:27:39.997725  253283 out.go:179] * Enabled addons: 
	I1115 10:27:39.997900  253283 out.go:179] * Verifying Kubernetes components...
	I1115 10:27:37.713862  250315 out.go:204] * Another minikube instance is downloading dependencies... 
	I1115 10:27:38.352196  254667 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:27:38.352388  254667 start.go:159] libmachine.API.Create for "kubernetes-upgrade-914881" (driver="docker")
	I1115 10:27:38.352413  254667 client.go:173] LocalClient.Create starting
	I1115 10:27:38.352526  254667 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem
	I1115 10:27:38.352561  254667 main.go:143] libmachine: Decoding PEM data...
	I1115 10:27:38.352578  254667 main.go:143] libmachine: Parsing certificate...
	I1115 10:27:38.352624  254667 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem
	I1115 10:27:38.352642  254667 main.go:143] libmachine: Decoding PEM data...
	I1115 10:27:38.352656  254667 main.go:143] libmachine: Parsing certificate...
	I1115 10:27:38.352942  254667 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-914881 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:27:38.378137  254667 cli_runner.go:211] docker network inspect kubernetes-upgrade-914881 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:27:38.378243  254667 network_create.go:284] running [docker network inspect kubernetes-upgrade-914881] to gather additional debugging logs...
	I1115 10:27:38.378276  254667 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-914881
	W1115 10:27:38.397820  254667 cli_runner.go:211] docker network inspect kubernetes-upgrade-914881 returned with exit code 1
	I1115 10:27:38.397859  254667 network_create.go:287] error running [docker network inspect kubernetes-upgrade-914881]: docker network inspect kubernetes-upgrade-914881: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-914881 not found
	I1115 10:27:38.397879  254667 network_create.go:289] output of [docker network inspect kubernetes-upgrade-914881]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-914881 not found
	
	** /stderr **
	I1115 10:27:38.398042  254667 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:27:38.415280  254667 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-77644897380e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:f9:e2:83:c3:91} reservation:<nil>}
	I1115 10:27:38.415639  254667 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e808d03b2052 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:03:24:ef:92:a1} reservation:<nil>}
	I1115 10:27:38.415973  254667 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3fb45eeaa66e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1a:b3:b7:0f:2e:99} reservation:<nil>}
	I1115 10:27:38.416314  254667 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-45fa74c79adc IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2e:55:cb:22:c6:84} reservation:<nil>}
	I1115 10:27:38.416762  254667 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ea8210}
	I1115 10:27:38.416788  254667 network_create.go:124] attempt to create docker network kubernetes-upgrade-914881 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1115 10:27:38.416835  254667 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-914881 kubernetes-upgrade-914881
	I1115 10:27:38.463994  254667 network_create.go:108] docker network kubernetes-upgrade-914881 192.168.85.0/24 created
	I1115 10:27:38.464027  254667 kic.go:121] calculated static IP "192.168.85.2" for the "kubernetes-upgrade-914881" container
	I1115 10:27:38.464112  254667 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:27:38.485021  254667 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-914881 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-914881 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:27:38.505381  254667 oci.go:103] Successfully created a docker volume kubernetes-upgrade-914881
	I1115 10:27:38.505470  254667 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-914881-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-914881 --entrypoint /usr/bin/test -v kubernetes-upgrade-914881:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:27:38.939065  254667 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-914881
	I1115 10:27:38.939162  254667 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 10:27:38.939182  254667 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:27:38.939259  254667 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-914881:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1115 10:27:41.817238  250315 image.go:63] Checking for docker.io/kicbase/stable:v0.0.42 in local cache directory
	I1115 10:27:41.817278  250315 image.go:66] Found docker.io/kicbase/stable:v0.0.42 in local cache directory, skipping pull
	I1115 10:27:41.817285  250315 image.go:105] docker.io/kicbase/stable:v0.0.42 exists in cache, skipping pull
	I1115 10:27:41.817302  250315 cache.go:152] successfully saved docker.io/kicbase/stable:v0.0.42 as a tarball
	I1115 10:27:41.817308  250315 cache.go:162] Loading docker.io/kicbase/stable:v0.0.42 from local cache
	I1115 10:27:43.305612  250315 cache.go:168] failed to download docker.io/kicbase/stable:v0.0.42, will try fallback image if available: error loading image: Error response from daemon: client version 1.43 is too old. Minimum supported API version is 1.44, please upgrade your client to a newer version
	E1115 10:27:43.305640  250315 cache.go:189] Error downloading kic artifacts:  failed to download kic base image or any fallback image
	I1115 10:27:43.305660  250315 cache.go:194] Successfully downloaded all kic artifacts
	I1115 10:27:43.305719  250315 start.go:365] acquiring machines lock for stopped-upgrade-567029: {Name:mk5336ae4d4d03321c8790135af3351b26bbd5f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:27:43.305858  250315 start.go:369] acquired machines lock for "stopped-upgrade-567029" in 117.264µs
	I1115 10:27:43.305901  250315 start.go:93] Provisioning new machine with config: &{Name:stopped-upgrade-567029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-567029 Namespace:default APIServerName:minikubeCA APIServ
erNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: S
taticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:27:43.306025  250315 start.go:125] createHost starting for "" (driver="docker")
	I1115 10:27:41.817086  252977 cache.go:152] successfully saved docker.io/kicbase/stable:v0.0.42 as a tarball
	I1115 10:27:41.817099  252977 cache.go:162] Loading docker.io/kicbase/stable:v0.0.42 from local cache
	I1115 10:27:43.338074  252977 cache.go:168] failed to download docker.io/kicbase/stable:v0.0.42, will try fallback image if available: error loading image: Error response from daemon: client version 1.43 is too old. Minimum supported API version is 1.44, please upgrade your client to a newer version
	E1115 10:27:43.338104  252977 cache.go:189] Error downloading kic artifacts:  failed to download kic base image or any fallback image
	I1115 10:27:43.338124  252977 cache.go:194] Successfully downloaded all kic artifacts
	I1115 10:27:43.338174  252977 start.go:365] acquiring machines lock for missing-upgrade-229925: {Name:mkeb4f626477dd186111fb07e6d25c72f7129196 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:27:43.338284  252977 start.go:369] acquired machines lock for "missing-upgrade-229925" in 94.795µs
	I1115 10:27:43.338306  252977 start.go:93] Provisioning new machine with config: &{Name:missing-upgrade-229925 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-229925 Namespace:default APIServerName:minikubeCA APIServ
erNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: S
taticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:27:43.338369  252977 start.go:125] createHost starting for "" (driver="docker")
	I1115 10:27:43.374820  252977 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:27:43.375196  252977 start.go:159] libmachine.API.Create for "missing-upgrade-229925" (driver="docker")
	I1115 10:27:43.375265  252977 client.go:168] LocalClient.Create starting
	I1115 10:27:43.375355  252977 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem
	I1115 10:27:43.375405  252977 main.go:141] libmachine: Decoding PEM data...
	I1115 10:27:43.375431  252977 main.go:141] libmachine: Parsing certificate...
	I1115 10:27:43.375513  252977 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem
	I1115 10:27:43.375549  252977 main.go:141] libmachine: Decoding PEM data...
	I1115 10:27:43.375561  252977 main.go:141] libmachine: Parsing certificate...
	I1115 10:27:43.376084  252977 cli_runner.go:164] Run: docker network inspect missing-upgrade-229925 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:27:43.395535  252977 cli_runner.go:211] docker network inspect missing-upgrade-229925 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:27:43.395645  252977 network_create.go:281] running [docker network inspect missing-upgrade-229925] to gather additional debugging logs...
	I1115 10:27:43.395664  252977 cli_runner.go:164] Run: docker network inspect missing-upgrade-229925
	W1115 10:27:43.412435  252977 cli_runner.go:211] docker network inspect missing-upgrade-229925 returned with exit code 1
	I1115 10:27:43.412460  252977 network_create.go:284] error running [docker network inspect missing-upgrade-229925]: docker network inspect missing-upgrade-229925: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-229925 not found
	I1115 10:27:43.412475  252977 network_create.go:286] output of [docker network inspect missing-upgrade-229925]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-229925 not found
	
	** /stderr **
	I1115 10:27:43.412574  252977 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:27:43.431071  252977 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-77644897380e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:f9:e2:83:c3:91} reservation:<nil>}
	I1115 10:27:43.431752  252977 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e808d03b2052 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:03:24:ef:92:a1} reservation:<nil>}
	I1115 10:27:43.432405  252977 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3fb45eeaa66e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1a:b3:b7:0f:2e:99} reservation:<nil>}
	I1115 10:27:43.432980  252977 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-45fa74c79adc IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2e:55:cb:22:c6:84} reservation:<nil>}
	I1115 10:27:43.433474  252977 network.go:214] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-440e841b6fd0 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:b2:17:71:11:a4:04} reservation:<nil>}
	I1115 10:27:43.434103  252977 network.go:209] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022a07f0}
	I1115 10:27:43.434131  252977 network_create.go:124] attempt to create docker network missing-upgrade-229925 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1115 10:27:43.434177  252977 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-229925 missing-upgrade-229925
	I1115 10:27:43.697934  252977 network_create.go:108] docker network missing-upgrade-229925 192.168.94.0/24 created
	I1115 10:27:43.697987  252977 kic.go:121] calculated static IP "192.168.94.2" for the "missing-upgrade-229925" container
	I1115 10:27:43.698067  252977 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:27:43.717462  252977 cli_runner.go:164] Run: docker volume create missing-upgrade-229925 --label name.minikube.sigs.k8s.io=missing-upgrade-229925 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:27:39.999193  253283 addons.go:515] duration metric: took 5.782194ms for enable addons: enabled=[]
	I1115 10:27:39.999238  253283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:27:40.391636  253283 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:27:40.469203  253283 node_ready.go:35] waiting up to 6m0s for node "pause-642487" to be "Ready" ...
	I1115 10:27:42.371811  253283 node_ready.go:49] node "pause-642487" is "Ready"
	I1115 10:27:42.371845  253283 node_ready.go:38] duration metric: took 1.902605378s for node "pause-642487" to be "Ready" ...
	I1115 10:27:42.371859  253283 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:27:42.371913  253283 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:27:42.457526  253283 api_server.go:72] duration metric: took 2.464329595s to wait for apiserver process to appear ...
	I1115 10:27:42.457621  253283 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:27:42.457657  253283 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:27:42.468975  253283 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1115 10:27:42.469020  253283 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1115 10:27:42.958724  253283 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:27:42.963165  253283 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:27:42.963196  253283 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:27:43.457831  253283 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:27:43.463627  253283 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:27:43.463658  253283 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:27:43.957865  253283 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:27:43.961677  253283 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:27:43.961698  253283 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:27:44.458384  253283 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:27:44.462587  253283 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:27:44.462615  253283 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
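The repeated 500s above all come from the same readiness loop: api_server.go polls https://192.168.76.2:8443/healthz roughly every 500ms, printing the per-hook breakdown while the rbac/bootstrap-roles post-start hook is still pending, and stops once the endpoint returns 200. A minimal sketch of that kind of poll follows (this is illustrative, not minikube's actual api_server.go; the interval, timeout, and TLS handling here are assumptions):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200
// or the deadline expires. Non-200 responses carry the per-hook breakdown
// seen in the log above, so the body is printed for debugging.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver certificate is self-signed in this context; a real
		// client would trust the cluster CA instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}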
	I1115 10:27:43.374851  250315 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:27:43.375263  250315 start.go:159] libmachine.API.Create for "stopped-upgrade-567029" (driver="docker")
	I1115 10:27:43.375310  250315 client.go:168] LocalClient.Create starting
	I1115 10:27:43.375674  250315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem
	I1115 10:27:43.375738  250315 main.go:141] libmachine: Decoding PEM data...
	I1115 10:27:43.375758  250315 main.go:141] libmachine: Parsing certificate...
	I1115 10:27:43.375873  250315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem
	I1115 10:27:43.375914  250315 main.go:141] libmachine: Decoding PEM data...
	I1115 10:27:43.375928  250315 main.go:141] libmachine: Parsing certificate...
	I1115 10:27:43.376470  250315 cli_runner.go:164] Run: docker network inspect stopped-upgrade-567029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:27:43.395172  250315 cli_runner.go:211] docker network inspect stopped-upgrade-567029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:27:43.395246  250315 network_create.go:281] running [docker network inspect stopped-upgrade-567029] to gather additional debugging logs...
	I1115 10:27:43.395260  250315 cli_runner.go:164] Run: docker network inspect stopped-upgrade-567029
	W1115 10:27:43.412700  250315 cli_runner.go:211] docker network inspect stopped-upgrade-567029 returned with exit code 1
	I1115 10:27:43.412721  250315 network_create.go:284] error running [docker network inspect stopped-upgrade-567029]: docker network inspect stopped-upgrade-567029: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network stopped-upgrade-567029 not found
	I1115 10:27:43.412736  250315 network_create.go:286] output of [docker network inspect stopped-upgrade-567029]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network stopped-upgrade-567029 not found
	
	** /stderr **
	I1115 10:27:43.412843  250315 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:27:43.430507  250315 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-77644897380e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:f9:e2:83:c3:91} reservation:<nil>}
	I1115 10:27:43.431143  250315 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e808d03b2052 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:03:24:ef:92:a1} reservation:<nil>}
	I1115 10:27:43.431842  250315 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3fb45eeaa66e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1a:b3:b7:0f:2e:99} reservation:<nil>}
	I1115 10:27:43.432536  250315 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-45fa74c79adc IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2e:55:cb:22:c6:84} reservation:<nil>}
	I1115 10:27:43.433289  250315 network.go:214] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-440e841b6fd0 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:b2:17:71:11:a4:04} reservation:<nil>}
	I1115 10:27:43.435173  250315 network.go:212] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1115 10:27:43.435775  250315 network.go:209] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002e24a30}
	I1115 10:27:43.435792  250315 network_create.go:124] attempt to create docker network stopped-upgrade-567029 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1115 10:27:43.435834  250315 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=stopped-upgrade-567029 stopped-upgrade-567029
	I1115 10:27:43.708414  250315 network_create.go:108] docker network stopped-upgrade-567029 192.168.103.0/24 created
	I1115 10:27:43.708469  250315 kic.go:121] calculated static IP "192.168.103.2" for the "stopped-upgrade-567029" container
	I1115 10:27:43.708547  250315 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:27:43.727521  250315 cli_runner.go:164] Run: docker volume create stopped-upgrade-567029 --label name.minikube.sigs.k8s.io=stopped-upgrade-567029 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:27:43.752646  250315 oci.go:103] Successfully created a docker volume stopped-upgrade-567029
	I1115 10:27:43.752732  250315 cli_runner.go:164] Run: docker run --rm --name stopped-upgrade-567029-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-567029 --entrypoint /usr/bin/test -v stopped-upgrade-567029:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
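Before the volume and sidecar steps above, network.go walked candidate private /24s for stopped-upgrade-567029 (192.168.49.0, 58, 67, 76, 85, 94, 103, stepping by 9 per the log), skipped the ones already bound to an existing bridge or reserved, and handed the first free subnet to docker network create. A rough sketch of that selection loop is below; the step size and candidate base are read off the log lines, the taken set stands in for the docker network inspect probing, and none of this is the actual minikube code:

package main

import "fmt"

// firstFreeSubnet walks 192.168.x.0/24 candidates starting at the given
// third octet and stepping by the given amount, returning the first subnet
// that is neither taken by an existing bridge nor reserved.
func firstFreeSubnet(start, step int, taken map[int]bool) (string, bool) {
	for octet := start; octet <= 255; octet += step {
		if taken[octet] {
			fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", octet)
			continue
		}
		return fmt.Sprintf("192.168.%d.0/24", octet), true
	}
	return "", false
}

func main() {
	// Subnets the log reports as taken or reserved on this host.
	taken := map[int]bool{49: true, 58: true, 67: true, 76: true, 85: true, 94: true}
	if subnet, ok := firstFreeSubnet(49, 9, taken); ok {
		// The corresponding CLI step, as logged:
		//   docker network create --driver=bridge --subnet=<subnet> --gateway=<x.x.x.1> ...
		fmt.Println("using free private subnet", subnet) // prints 192.168.103.0/24 here
	}
}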
	I1115 10:27:45.786630  254667 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-914881:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (6.847280473s)
	I1115 10:27:45.786668  254667 kic.go:203] duration metric: took 6.84748174s to extract preloaded images to volume ...
	W1115 10:27:45.786841  254667 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 10:27:45.787011  254667 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 10:27:45.855097  254667 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-914881 --name kubernetes-upgrade-914881 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-914881 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-914881 --network kubernetes-upgrade-914881 --ip 192.168.85.2 --volume kubernetes-upgrade-914881:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 10:27:46.160541  254667 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-914881 --format={{.State.Running}}
	I1115 10:27:46.178349  254667 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-914881 --format={{.State.Status}}
	I1115 10:27:46.196723  254667 cli_runner.go:164] Run: docker exec kubernetes-upgrade-914881 stat /var/lib/dpkg/alternatives/iptables
	I1115 10:27:46.243070  254667 oci.go:144] the created container "kubernetes-upgrade-914881" has a running status.
	I1115 10:27:46.243100  254667 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/kubernetes-upgrade-914881/id_rsa...
	I1115 10:27:47.102486  254667 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21894-55448/.minikube/machines/kubernetes-upgrade-914881/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 10:27:47.125603  254667 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-914881 --format={{.State.Status}}
	I1115 10:27:47.142848  254667 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 10:27:47.142868  254667 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-914881 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 10:27:47.190891  254667 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-914881 --format={{.State.Status}}
	I1115 10:27:47.208000  254667 machine.go:94] provisionDockerMachine start ...
	I1115 10:27:47.208103  254667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-914881
	I1115 10:27:47.224325  254667 main.go:143] libmachine: Using SSH client type: native
	I1115 10:27:47.224573  254667 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33004 <nil> <nil>}
	I1115 10:27:47.224587  254667 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:27:47.351864  254667 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-914881
	
	I1115 10:27:47.351893  254667 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-914881"
	I1115 10:27:47.351996  254667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-914881
	I1115 10:27:47.371600  254667 main.go:143] libmachine: Using SSH client type: native
	I1115 10:27:47.371893  254667 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33004 <nil> <nil>}
	I1115 10:27:47.371915  254667 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-914881 && echo "kubernetes-upgrade-914881" | sudo tee /etc/hostname
	I1115 10:27:47.518404  254667 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-914881
	
	I1115 10:27:47.518474  254667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-914881
	I1115 10:27:47.535215  254667 main.go:143] libmachine: Using SSH client type: native
	I1115 10:27:47.535461  254667 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33004 <nil> <nil>}
	I1115 10:27:47.535490  254667 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-914881' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-914881/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-914881' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:27:47.662480  254667 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:27:47.662516  254667 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-55448/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-55448/.minikube}
	I1115 10:27:47.662558  254667 ubuntu.go:190] setting up certificates
	I1115 10:27:47.662574  254667 provision.go:84] configureAuth start
	I1115 10:27:47.662636  254667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-914881
	I1115 10:27:47.679273  254667 provision.go:143] copyHostCerts
	I1115 10:27:47.679336  254667 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem, removing ...
	I1115 10:27:47.679348  254667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem
	I1115 10:27:47.679412  254667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem (1082 bytes)
	I1115 10:27:47.679496  254667 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem, removing ...
	I1115 10:27:47.679507  254667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem
	I1115 10:27:47.679533  254667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem (1123 bytes)
	I1115 10:27:47.679588  254667 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem, removing ...
	I1115 10:27:47.679594  254667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem
	I1115 10:27:47.679615  254667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem (1679 bytes)
	I1115 10:27:47.679671  254667 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-914881 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-914881 localhost minikube]
	I1115 10:27:47.764722  254667 provision.go:177] copyRemoteCerts
	I1115 10:27:47.764794  254667 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:27:47.764830  254667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-914881
	I1115 10:27:47.781830  254667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/kubernetes-upgrade-914881/id_rsa Username:docker}
	I1115 10:27:47.877066  254667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1115 10:27:47.900835  254667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 10:27:47.919996  254667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:27:47.938762  254667 provision.go:87] duration metric: took 276.168397ms to configureAuth
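The configureAuth step that just finished signs a per-machine server certificate with the shared minikube CA, using the SANs printed above (127.0.0.1, 192.168.85.2, the hostname, localhost, minikube), then scp's server.pem, server-key.pem, and ca.pem into /etc/docker. A bare-bones version of the signing step with crypto/x509 is sketched below; the key size, validity, and in-memory CA are simplifications for illustration, not minikube's provision code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA: minikube would load ca.pem / ca-key.pem from .minikube/certs.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * 365 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the provision.go log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "kubernetes-upgrade-914881"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * 365 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"kubernetes-upgrade-914881", "localhost", "minikube"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}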
	I1115 10:27:47.938793  254667 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:27:47.939024  254667 config.go:182] Loaded profile config "kubernetes-upgrade-914881": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 10:27:47.939154  254667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-914881
	I1115 10:27:47.958377  254667 main.go:143] libmachine: Using SSH client type: native
	I1115 10:27:47.958722  254667 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33004 <nil> <nil>}
	I1115 10:27:47.958750  254667 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:27:43.751740  252977 oci.go:103] Successfully created a docker volume missing-upgrade-229925
	I1115 10:27:43.754521  252977 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-229925-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-229925 --entrypoint /usr/bin/test -v missing-upgrade-229925:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1115 10:27:44.958434  253283 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:27:44.962565  253283 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:27:44.962596  253283 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:27:45.458211  253283 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:27:45.462489  253283 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:27:45.462516  253283 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:27:45.958128  253283 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:27:45.962051  253283 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1115 10:27:45.962944  253283 api_server.go:141] control plane version: v1.34.1
	I1115 10:27:45.962981  253283 api_server.go:131] duration metric: took 3.505340863s to wait for apiserver health ...
	I1115 10:27:45.962991  253283 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:27:45.965940  253283 system_pods.go:59] 7 kube-system pods found
	I1115 10:27:45.965995  253283 system_pods.go:61] "coredns-66bc5c9577-8nbgb" [9f4b3526-3889-4bc5-81e0-cbab60c70c2d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:27:45.966007  253283 system_pods.go:61] "etcd-pause-642487" [8d12057d-2b14-41f1-b978-6ae6055dd411] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:27:45.966013  253283 system_pods.go:61] "kindnet-jh5hv" [17e3aa21-2ac9-4ce2-9a63-54e13281bde5] Running
	I1115 10:27:45.966023  253283 system_pods.go:61] "kube-apiserver-pause-642487" [d989168d-4a2d-4db8-a4ac-068d4907344c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:27:45.966031  253283 system_pods.go:61] "kube-controller-manager-pause-642487" [f8184dcf-1283-40bb-bf91-612578293a38] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:27:45.966038  253283 system_pods.go:61] "kube-proxy-jhknt" [66848a4f-7a86-4b64-adb1-2ebb61ff9ddc] Running
	I1115 10:27:45.966043  253283 system_pods.go:61] "kube-scheduler-pause-642487" [b34e6f2e-392d-4b42-be08-6b7bf986de7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:27:45.966052  253283 system_pods.go:74] duration metric: took 3.052477ms to wait for pod list to return data ...
	I1115 10:27:45.966063  253283 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:27:45.968084  253283 default_sa.go:45] found service account: "default"
	I1115 10:27:45.968103  253283 default_sa.go:55] duration metric: took 2.031853ms for default service account to be created ...
	I1115 10:27:45.968111  253283 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:27:45.970452  253283 system_pods.go:86] 7 kube-system pods found
	I1115 10:27:45.970484  253283 system_pods.go:89] "coredns-66bc5c9577-8nbgb" [9f4b3526-3889-4bc5-81e0-cbab60c70c2d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:27:45.970492  253283 system_pods.go:89] "etcd-pause-642487" [8d12057d-2b14-41f1-b978-6ae6055dd411] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:27:45.970497  253283 system_pods.go:89] "kindnet-jh5hv" [17e3aa21-2ac9-4ce2-9a63-54e13281bde5] Running
	I1115 10:27:45.970502  253283 system_pods.go:89] "kube-apiserver-pause-642487" [d989168d-4a2d-4db8-a4ac-068d4907344c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:27:45.970508  253283 system_pods.go:89] "kube-controller-manager-pause-642487" [f8184dcf-1283-40bb-bf91-612578293a38] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:27:45.970514  253283 system_pods.go:89] "kube-proxy-jhknt" [66848a4f-7a86-4b64-adb1-2ebb61ff9ddc] Running
	I1115 10:27:45.970519  253283 system_pods.go:89] "kube-scheduler-pause-642487" [b34e6f2e-392d-4b42-be08-6b7bf986de7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:27:45.970532  253283 system_pods.go:126] duration metric: took 2.414128ms to wait for k8s-apps to be running ...
	I1115 10:27:45.970545  253283 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:27:45.970592  253283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:27:45.984334  253283 system_svc.go:56] duration metric: took 13.778291ms WaitForService to wait for kubelet
	I1115 10:27:45.984365  253283 kubeadm.go:587] duration metric: took 5.991197356s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:27:45.984387  253283 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:27:45.987043  253283 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:27:45.987072  253283 node_conditions.go:123] node cpu capacity is 8
	I1115 10:27:45.987086  253283 node_conditions.go:105] duration metric: took 2.692928ms to run NodePressure ...
	I1115 10:27:45.987102  253283 start.go:242] waiting for startup goroutines ...
	I1115 10:27:45.987118  253283 start.go:247] waiting for cluster config update ...
	I1115 10:27:45.987133  253283 start.go:256] writing updated cluster config ...
	I1115 10:27:45.987422  253283 ssh_runner.go:195] Run: rm -f paused
	I1115 10:27:45.990985  253283 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:27:45.991414  253283 kapi.go:59] client config for pause-642487: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487/client.key", CAFile:"/home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]stri
ng(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 10:27:45.993575  253283 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8nbgb" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 10:27:47.999106  253283 pod_ready.go:104] pod "coredns-66bc5c9577-8nbgb" is not "Ready", error: <nil>
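The pod_ready.go wait above is an ordinary readiness loop against the pause-642487 apiserver: build a client from the profile's client.crt/client.key and the cluster ca.crt (as shown in the kapi client config), then re-check each kube-system pod's Ready condition until it is true, the pod is gone, or the 4m budget runs out. A condensed sketch with client-go follows; the paths, host, namespace, and pod name are taken from the log, while the helper itself is illustrative rather than minikube's pod_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// waitPodReadyOrGone re-checks a pod until its Ready condition is True, it is
// deleted, or the timeout expires - the same outcome space as the
// "to be Ready or be gone" wait in the log above.
func waitPodReadyOrGone(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return nil // pod is gone, which also ends the wait
		}
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
			fmt.Printf("pod %q is not \"Ready\"\n", name)
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s never became Ready", ns, name)
}

func main() {
	cfg := &rest.Config{
		Host: "https://192.168.76.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487/client.key",
			CAFile:   "/home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReadyOrGone(cs, "kube-system", "coredns-66bc5c9577-8nbgb", 4*time.Minute))
}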
	I1115 10:27:48.205288  254667 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:27:48.205310  254667 machine.go:97] duration metric: took 997.280736ms to provisionDockerMachine
	I1115 10:27:48.205320  254667 client.go:176] duration metric: took 9.852898594s to LocalClient.Create
	I1115 10:27:48.205341  254667 start.go:167] duration metric: took 9.852953632s to libmachine.API.Create "kubernetes-upgrade-914881"
	I1115 10:27:48.205348  254667 start.go:293] postStartSetup for "kubernetes-upgrade-914881" (driver="docker")
	I1115 10:27:48.205361  254667 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:27:48.205422  254667 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:27:48.205463  254667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-914881
	I1115 10:27:48.222450  254667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/kubernetes-upgrade-914881/id_rsa Username:docker}
	I1115 10:27:48.318261  254667 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:27:48.322250  254667 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:27:48.322279  254667 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:27:48.322292  254667 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/addons for local assets ...
	I1115 10:27:48.322352  254667 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/files for local assets ...
	I1115 10:27:48.322464  254667 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem -> 589622.pem in /etc/ssl/certs
	I1115 10:27:48.322600  254667 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:27:48.330387  254667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:27:48.353166  254667 start.go:296] duration metric: took 147.798785ms for postStartSetup
	I1115 10:27:48.353561  254667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-914881
	I1115 10:27:48.372622  254667 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/config.json ...
	I1115 10:27:48.372880  254667 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:27:48.372932  254667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-914881
	I1115 10:27:48.391647  254667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/kubernetes-upgrade-914881/id_rsa Username:docker}
	I1115 10:27:48.487219  254667 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:27:48.492802  254667 start.go:128] duration metric: took 10.142230633s to createHost
	I1115 10:27:48.492830  254667 start.go:83] releasing machines lock for "kubernetes-upgrade-914881", held for 10.142367049s
	I1115 10:27:48.492902  254667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-914881
	I1115 10:27:48.511141  254667 ssh_runner.go:195] Run: cat /version.json
	I1115 10:27:48.511202  254667 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:27:48.511258  254667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-914881
	I1115 10:27:48.511203  254667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-914881
	I1115 10:27:48.528738  254667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/kubernetes-upgrade-914881/id_rsa Username:docker}
	I1115 10:27:48.528985  254667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/kubernetes-upgrade-914881/id_rsa Username:docker}
	I1115 10:27:48.680757  254667 ssh_runner.go:195] Run: systemctl --version
	I1115 10:27:48.687292  254667 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:27:48.721053  254667 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:27:48.725744  254667 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:27:48.725800  254667 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:27:48.750323  254667 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1115 10:27:48.750354  254667 start.go:496] detecting cgroup driver to use...
	I1115 10:27:48.750391  254667 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:27:48.750439  254667 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:27:48.765399  254667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:27:48.777585  254667 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:27:48.777648  254667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:27:48.793032  254667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:27:48.809028  254667 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:27:48.906462  254667 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:27:49.001586  254667 docker.go:234] disabling docker service ...
	I1115 10:27:49.001647  254667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:27:49.020352  254667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:27:49.032487  254667 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:27:49.134766  254667 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:27:49.219392  254667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:27:49.232446  254667 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:27:49.246035  254667 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1115 10:27:49.246100  254667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:49.256066  254667 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:27:49.256121  254667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:49.264478  254667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:49.272817  254667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:49.281545  254667 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:27:49.289205  254667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:49.297519  254667 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:49.311352  254667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:49.320035  254667 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:27:49.327113  254667 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:27:49.334281  254667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:27:49.424343  254667 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:27:49.759791  254667 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:27:49.759897  254667 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:27:49.763971  254667 start.go:564] Will wait 60s for crictl version
	I1115 10:27:49.764033  254667 ssh_runner.go:195] Run: which crictl
	I1115 10:27:49.767517  254667 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:27:49.790125  254667 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:27:49.790193  254667 ssh_runner.go:195] Run: crio --version
	I1115 10:27:49.816493  254667 ssh_runner.go:195] Run: crio --version
	I1115 10:27:49.844287  254667 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1115 10:27:49.845502  254667 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-914881 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:27:49.861680  254667 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1115 10:27:49.865661  254667 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:27:49.877199  254667 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-914881 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-914881 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuth
Sock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:27:49.877374  254667 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 10:27:49.877430  254667 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:27:49.916370  254667 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:27:49.916399  254667 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:27:49.916455  254667 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:27:49.943783  254667 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:27:49.943809  254667 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:27:49.943820  254667 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1115 10:27:49.943931  254667 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-914881 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-914881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:27:49.944034  254667 ssh_runner.go:195] Run: crio config
	I1115 10:27:49.994837  254667 cni.go:84] Creating CNI manager for ""
	I1115 10:27:49.994863  254667 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:27:49.994887  254667 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:27:49.994913  254667 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-914881 NodeName:kubernetes-upgrade-914881 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:27:49.995092  254667 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-914881"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:27:49.995177  254667 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1115 10:27:50.004130  254667 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:27:50.004198  254667 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:27:50.011936  254667 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1115 10:27:50.024324  254667 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:27:50.038924  254667 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I1115 10:27:50.051575  254667 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:27:50.055042  254667 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:27:50.064612  254667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:27:50.153630  254667 ssh_runner.go:195] Run: sudo systemctl start kubelet
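	(A minimal sketch, assuming the hard-coded IP and hostname from this run, of what the /etc/hosts rewrite above does: drop any stale control-plane.minikube.internal line, then append the current mapping. File path and entry are taken from the logged command; this is not minikube's own code.)

	package main

	import (
		"os"
		"strings"
	)

	// Sketch of the shell pipeline above: { grep -v ...; echo ...; } > tmp; cp tmp /etc/hosts.
	func main() {
		const hostsPath = "/etc/hosts"
		const entry = "192.168.85.2\tcontrol-plane.minikube.internal"

		data, err := os.ReadFile(hostsPath)
		if err != nil {
			panic(err)
		}

		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any existing mapping for the control-plane name; keep everything else.
			if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)

		if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
	}
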
	I1115 10:27:50.182999  254667 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881 for IP: 192.168.85.2
	I1115 10:27:50.183028  254667 certs.go:195] generating shared ca certs ...
	I1115 10:27:50.183050  254667 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:27:50.183237  254667 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 10:27:50.183306  254667 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 10:27:50.183322  254667 certs.go:257] generating profile certs ...
	I1115 10:27:50.183395  254667 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/client.key
	I1115 10:27:50.183423  254667 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/client.crt with IP's: []
	I1115 10:27:50.395529  254667 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/client.crt ...
	I1115 10:27:50.395561  254667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/client.crt: {Name:mk573464f21868e08a58dc2e57c10697a3e4721a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:27:50.395733  254667 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/client.key ...
	I1115 10:27:50.395746  254667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/client.key: {Name:mk2905b5bd443614626e6860396d60c0ab5adc65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:27:50.395823  254667 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/apiserver.key.440325a9
	I1115 10:27:50.395839  254667 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/apiserver.crt.440325a9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1115 10:27:50.955042  254667 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/apiserver.crt.440325a9 ...
	I1115 10:27:50.955072  254667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/apiserver.crt.440325a9: {Name:mk8a5f94563e3ebbc844e953848ac9e54091501b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:27:50.955275  254667 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/apiserver.key.440325a9 ...
	I1115 10:27:50.955297  254667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/apiserver.key.440325a9: {Name:mk8e02912c9e23cba80d97b436b974f14f651eab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:27:50.955418  254667 certs.go:382] copying /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/apiserver.crt.440325a9 -> /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/apiserver.crt
	I1115 10:27:50.955526  254667 certs.go:386] copying /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/apiserver.key.440325a9 -> /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/apiserver.key
	I1115 10:27:50.955616  254667 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/proxy-client.key
	I1115 10:27:50.955638  254667 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/proxy-client.crt with IP's: []
	I1115 10:27:51.092835  254667 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/proxy-client.crt ...
	I1115 10:27:51.092863  254667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/proxy-client.crt: {Name:mk2b5bfc15d7bfb5cfc4a1da4ee531587d453a54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:27:51.093072  254667 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/proxy-client.key ...
	I1115 10:27:51.093092  254667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/proxy-client.key: {Name:mk1f67f5d6e5f744fe14c4d47c5b32ed3babc210 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
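	(The certs.go/crypto.go lines above generate per-profile client, apiserver, and aggregator certificates and sign them with the shared minikube CA. Below is a minimal, standalone sketch of that pattern using only the Go standard library; the file names, subject fields, and PKCS#1 key encoding are illustrative assumptions, not minikube's actual layout.)

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"os"
		"time"
	)

	func main() {
		// Load an existing CA certificate and key (illustrative paths).
		caCertPEM, _ := os.ReadFile("ca.crt")
		caKeyPEM, _ := os.ReadFile("ca.key")
		caBlock, _ := pem.Decode(caCertPEM)
		caCert, err := x509.ParseCertificate(caBlock.Bytes)
		if err != nil {
			panic(err)
		}
		keyBlock, _ := pem.Decode(caKeyPEM)
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an "RSA PRIVATE KEY" (PKCS#1) block
		if err != nil {
			panic(err)
		}

		// Fresh key pair for the client certificate.
		priv, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}

		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}}, // illustrative subject
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
		}

		// Sign the new certificate with the CA.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &priv.PublicKey, caKey)
		if err != nil {
			panic(err)
		}

		// Write PEM-encoded certificate and key, roughly like the WriteFile steps in the log.
		_ = os.WriteFile("client.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
		_ = os.WriteFile("client.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(priv)}), 0o600)
	}
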
	I1115 10:27:51.093326  254667 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:27:51.093371  254667 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:27:51.093389  254667 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:27:51.093429  254667 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:27:51.093462  254667 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:27:51.093497  254667 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:27:51.093551  254667 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:27:51.094237  254667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:27:51.113018  254667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:27:51.129793  254667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:27:51.146567  254667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:27:51.163226  254667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1115 10:27:51.180263  254667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:27:51.197808  254667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:27:51.220575  254667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 10:27:51.240269  254667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:27:51.263761  254667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:27:51.281699  254667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:27:51.302484  254667 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:27:51.315629  254667 ssh_runner.go:195] Run: openssl version
	I1115 10:27:51.322346  254667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:27:51.331251  254667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:27:51.335462  254667 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:27:51.335514  254667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:27:51.370131  254667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:27:51.378463  254667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:27:51.387350  254667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:27:51.391076  254667 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:27:51.391132  254667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:27:51.425820  254667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:27:51.434246  254667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:27:51.442466  254667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:27:51.446401  254667 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:27:51.446462  254667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:27:51.480505  254667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:27:51.488946  254667 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:27:51.492791  254667 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 10:27:51.492853  254667 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-914881 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-914881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:27:51.492982  254667 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:27:51.493035  254667 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:27:51.520878  254667 cri.go:89] found id: ""
	I1115 10:27:51.520948  254667 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:27:51.528888  254667 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 10:27:51.536525  254667 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 10:27:51.536575  254667 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 10:27:51.544138  254667 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 10:27:51.544156  254667 kubeadm.go:158] found existing configuration files:
	
	I1115 10:27:51.544219  254667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 10:27:51.552035  254667 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 10:27:51.552101  254667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 10:27:51.559261  254667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 10:27:51.566725  254667 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 10:27:51.566783  254667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 10:27:51.573569  254667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 10:27:51.581012  254667 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 10:27:51.581065  254667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 10:27:51.587845  254667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 10:27:51.594922  254667 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 10:27:51.595000  254667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 10:27:51.601799  254667 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 10:27:51.654293  254667 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1115 10:27:51.654375  254667 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 10:27:51.692047  254667 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 10:27:51.692138  254667 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1115 10:27:51.692200  254667 kubeadm.go:319] OS: Linux
	I1115 10:27:51.692266  254667 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 10:27:51.692326  254667 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 10:27:51.692393  254667 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 10:27:51.692458  254667 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 10:27:51.692528  254667 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 10:27:51.692594  254667 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 10:27:51.692673  254667 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 10:27:51.692744  254667 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 10:27:51.692840  254667 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 10:27:51.761320  254667 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 10:27:51.761456  254667 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 10:27:51.761600  254667 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1115 10:27:51.927754  254667 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 10:27:51.930525  254667 out.go:252]   - Generating certificates and keys ...
	I1115 10:27:51.930661  254667 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 10:27:51.930777  254667 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 10:27:52.140117  254667 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 10:27:52.256187  254667 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 10:27:52.386520  254667 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 10:27:52.559761  254667 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 10:27:52.736556  254667 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 10:27:52.736726  254667 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-914881 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1115 10:27:52.827015  254667 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 10:27:52.827211  254667 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-914881 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1115 10:27:53.019232  254667 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 10:27:53.253702  254667 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 10:27:53.336002  254667 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 10:27:53.336085  254667 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 10:27:53.421057  254667 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 10:27:53.591346  254667 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 10:27:53.685695  254667 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 10:27:53.786443  254667 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 10:27:53.787201  254667 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 10:27:53.791788  254667 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1115 10:27:49.999544  253283 pod_ready.go:104] pod "coredns-66bc5c9577-8nbgb" is not "Ready", error: <nil>
	I1115 10:27:50.499397  253283 pod_ready.go:94] pod "coredns-66bc5c9577-8nbgb" is "Ready"
	I1115 10:27:50.499425  253283 pod_ready.go:86] duration metric: took 4.505831755s for pod "coredns-66bc5c9577-8nbgb" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:27:50.501755  253283 pod_ready.go:83] waiting for pod "etcd-pause-642487" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:27:50.505213  253283 pod_ready.go:94] pod "etcd-pause-642487" is "Ready"
	I1115 10:27:50.505232  253283 pod_ready.go:86] duration metric: took 3.458764ms for pod "etcd-pause-642487" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:27:50.507060  253283 pod_ready.go:83] waiting for pod "kube-apiserver-pause-642487" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 10:27:52.512599  253283 pod_ready.go:104] pod "kube-apiserver-pause-642487" is not "Ready", error: <nil>
	W1115 10:27:55.012760  253283 pod_ready.go:104] pod "kube-apiserver-pause-642487" is not "Ready", error: <nil>
	I1115 10:27:56.512499  253283 pod_ready.go:94] pod "kube-apiserver-pause-642487" is "Ready"
	I1115 10:27:56.512533  253283 pod_ready.go:86] duration metric: took 6.00544877s for pod "kube-apiserver-pause-642487" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:27:56.514778  253283 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-642487" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:27:56.518829  253283 pod_ready.go:94] pod "kube-controller-manager-pause-642487" is "Ready"
	I1115 10:27:56.518852  253283 pod_ready.go:86] duration metric: took 4.05134ms for pod "kube-controller-manager-pause-642487" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:27:56.520763  253283 pod_ready.go:83] waiting for pod "kube-proxy-jhknt" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:27:56.524616  253283 pod_ready.go:94] pod "kube-proxy-jhknt" is "Ready"
	I1115 10:27:56.524637  253283 pod_ready.go:86] duration metric: took 3.853942ms for pod "kube-proxy-jhknt" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:27:56.526524  253283 pod_ready.go:83] waiting for pod "kube-scheduler-pause-642487" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:27:56.711833  253283 pod_ready.go:94] pod "kube-scheduler-pause-642487" is "Ready"
	I1115 10:27:56.711863  253283 pod_ready.go:86] duration metric: took 185.314944ms for pod "kube-scheduler-pause-642487" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:27:56.711878  253283 pod_ready.go:40] duration metric: took 10.720860275s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:27:56.761368  253283 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:27:56.763472  253283 out.go:179] * Done! kubectl is now configured to use "pause-642487" cluster and "default" namespace by default
	I1115 10:27:53.793470  254667 out.go:252]   - Booting up control plane ...
	I1115 10:27:53.793587  254667 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 10:27:53.793699  254667 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 10:27:53.794263  254667 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 10:27:53.809602  254667 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 10:27:53.810412  254667 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 10:27:53.810492  254667 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 10:27:53.912862  254667 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	
	
	==> CRI-O <==
	Nov 15 10:27:38 pause-642487 crio[2298]: time="2025-11-15T10:27:38.157218844Z" level=info msg="Starting container: 0ef58fd5a90ee7a6981b544c76026e5556c7883732036cbb3edd93733ccf4a6c" id=3fc6f235-afd4-4baa-a689-7f90162f5495 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:27:38 pause-642487 crio[2298]: time="2025-11-15T10:27:38.157552138Z" level=info msg="Starting container: ad476564fba4d88d6b14f082586a864bcf39dadd4f5371f877dda5662a573cf6" id=a5a9aeb1-bee4-4bc8-ad6a-cc82ad540b1a name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:27:38 pause-642487 crio[2298]: time="2025-11-15T10:27:38.158032545Z" level=info msg="Started container" PID=2521 containerID=602dacd6f75382f88ebd4b70e2712c63d504c2104f09e69e7ed06953540d3bd6 description=kube-system/kindnet-jh5hv/kindnet-cni id=f799f1f8-2434-4890-badf-b336c316cb3e name=/runtime.v1.RuntimeService/StartContainer sandboxID=f3c1c95bc887b75d87a6cfd0ba3641aee12535d352b8f7ec34a007a01a0af04a
	Nov 15 10:27:38 pause-642487 crio[2298]: time="2025-11-15T10:27:38.158183959Z" level=info msg="Started container" PID=2501 containerID=fb8520106a10f5510cb789cf55fa41e2dcef15b4e49b1e8e695b835e5d20c27f description=kube-system/kube-controller-manager-pause-642487/kube-controller-manager id=1e6fd0e0-6786-4792-956a-9982698de34e name=/runtime.v1.RuntimeService/StartContainer sandboxID=344368da8364c593f8715163dc74a517d9801d1d0d212cf235d57311667695c2
	Nov 15 10:27:38 pause-642487 crio[2298]: time="2025-11-15T10:27:38.1601462Z" level=info msg="Created container 3d25753d2c98879b7a05bcb03f07c48e41dbace9c81399d0d21f0952ab2ca92b: kube-system/kube-proxy-jhknt/kube-proxy" id=4ed7f28c-4bf8-4541-b3be-ed25846ea6a6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:27:38 pause-642487 crio[2298]: time="2025-11-15T10:27:38.165264584Z" level=info msg="Started container" PID=2512 containerID=0ef58fd5a90ee7a6981b544c76026e5556c7883732036cbb3edd93733ccf4a6c description=kube-system/kube-apiserver-pause-642487/kube-apiserver id=3fc6f235-afd4-4baa-a689-7f90162f5495 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2bd1e8d174df222901341522574aacefce72ba747d59f30578fe6e69a06f3a21
	Nov 15 10:27:38 pause-642487 crio[2298]: time="2025-11-15T10:27:38.165599262Z" level=info msg="Started container" PID=2509 containerID=ad476564fba4d88d6b14f082586a864bcf39dadd4f5371f877dda5662a573cf6 description=kube-system/etcd-pause-642487/etcd id=a5a9aeb1-bee4-4bc8-ad6a-cc82ad540b1a name=/runtime.v1.RuntimeService/StartContainer sandboxID=49d8d7e8aa7b9cae28f09cefc2873c31c9f72b2bdc72a52acbf2f1258daeb3c8
	Nov 15 10:27:38 pause-642487 crio[2298]: time="2025-11-15T10:27:38.1660251Z" level=info msg="Starting container: 3d25753d2c98879b7a05bcb03f07c48e41dbace9c81399d0d21f0952ab2ca92b" id=c7676357-4a64-4aec-8405-976c3207eeba name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:27:38 pause-642487 crio[2298]: time="2025-11-15T10:27:38.166374336Z" level=info msg="Started container" PID=2528 containerID=79929c6a2bd5c66ee1f199a1d7d3bd6158f0914fb6ede8beb6a05b4c765efed2 description=kube-system/kube-scheduler-pause-642487/kube-scheduler id=e86b8d04-72ce-4b3e-b47f-402ee9eed324 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b77e08415993d81e6ca4abbfd1702e12a8dfc1ba16d575eb54d4485a48ce3af8
	Nov 15 10:27:38 pause-642487 crio[2298]: time="2025-11-15T10:27:38.173426468Z" level=info msg="Started container" PID=2532 containerID=3d25753d2c98879b7a05bcb03f07c48e41dbace9c81399d0d21f0952ab2ca92b description=kube-system/kube-proxy-jhknt/kube-proxy id=c7676357-4a64-4aec-8405-976c3207eeba name=/runtime.v1.RuntimeService/StartContainer sandboxID=7702971120d9b99825817dfaafd32f436e4d52f180683730eacab1fdeff88596
	Nov 15 10:27:38 pause-642487 crio[2298]: time="2025-11-15T10:27:38.173678737Z" level=info msg="Created container a5225c7077c6f279cc61d6fc15cdb423b982b29412dd8a173b4bdc1c976af30b: kube-system/coredns-66bc5c9577-8nbgb/coredns" id=00518024-9c37-40a7-b629-df1a1bc5c9ab name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:27:38 pause-642487 crio[2298]: time="2025-11-15T10:27:38.176054575Z" level=info msg="Starting container: a5225c7077c6f279cc61d6fc15cdb423b982b29412dd8a173b4bdc1c976af30b" id=8192e47d-e4cc-45d9-b907-0f443408cc33 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:27:38 pause-642487 crio[2298]: time="2025-11-15T10:27:38.18312234Z" level=info msg="Started container" PID=2542 containerID=a5225c7077c6f279cc61d6fc15cdb423b982b29412dd8a173b4bdc1c976af30b description=kube-system/coredns-66bc5c9577-8nbgb/coredns id=8192e47d-e4cc-45d9-b907-0f443408cc33 name=/runtime.v1.RuntimeService/StartContainer sandboxID=06739937e202986d610f705d73fe19afa5efb5d7d718935a90480d934a791875
	Nov 15 10:27:48 pause-642487 crio[2298]: time="2025-11-15T10:27:48.657011165Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:27:48 pause-642487 crio[2298]: time="2025-11-15T10:27:48.661775552Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:27:48 pause-642487 crio[2298]: time="2025-11-15T10:27:48.661805824Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:27:48 pause-642487 crio[2298]: time="2025-11-15T10:27:48.661834525Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:27:48 pause-642487 crio[2298]: time="2025-11-15T10:27:48.665441401Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:27:48 pause-642487 crio[2298]: time="2025-11-15T10:27:48.665465166Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:27:48 pause-642487 crio[2298]: time="2025-11-15T10:27:48.665488787Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:27:48 pause-642487 crio[2298]: time="2025-11-15T10:27:48.66901515Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:27:48 pause-642487 crio[2298]: time="2025-11-15T10:27:48.669039883Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:27:48 pause-642487 crio[2298]: time="2025-11-15T10:27:48.669060952Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:27:48 pause-642487 crio[2298]: time="2025-11-15T10:27:48.672343657Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:27:48 pause-642487 crio[2298]: time="2025-11-15T10:27:48.672369997Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	a5225c7077c6f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   22 seconds ago       Running             coredns                   1                   06739937e2029       coredns-66bc5c9577-8nbgb               kube-system
	79929c6a2bd5c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   22 seconds ago       Running             kube-scheduler            1                   b77e08415993d       kube-scheduler-pause-642487            kube-system
	3d25753d2c988       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   22 seconds ago       Running             kube-proxy                1                   7702971120d9b       kube-proxy-jhknt                       kube-system
	602dacd6f7538       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   22 seconds ago       Running             kindnet-cni               1                   f3c1c95bc887b       kindnet-jh5hv                          kube-system
	0ef58fd5a90ee       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   22 seconds ago       Running             kube-apiserver            1                   2bd1e8d174df2       kube-apiserver-pause-642487            kube-system
	ad476564fba4d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   22 seconds ago       Running             etcd                      1                   49d8d7e8aa7b9       etcd-pause-642487                      kube-system
	fb8520106a10f       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   22 seconds ago       Running             kube-controller-manager   1                   344368da8364c       kube-controller-manager-pause-642487   kube-system
	9cc96cc9a3498       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   34 seconds ago       Exited              coredns                   0                   06739937e2029       coredns-66bc5c9577-8nbgb               kube-system
	8bc8fa4817aeb       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   About a minute ago   Exited              kindnet-cni               0                   f3c1c95bc887b       kindnet-jh5hv                          kube-system
	7816357c42fe9       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   About a minute ago   Exited              kube-proxy                0                   7702971120d9b       kube-proxy-jhknt                       kube-system
	f6ded8d1e4b36       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   About a minute ago   Exited              etcd                      0                   49d8d7e8aa7b9       etcd-pause-642487                      kube-system
	3bc6859c7b004       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   About a minute ago   Exited              kube-controller-manager   0                   344368da8364c       kube-controller-manager-pause-642487   kube-system
	212c7652d9dd7       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   About a minute ago   Exited              kube-apiserver            0                   2bd1e8d174df2       kube-apiserver-pause-642487            kube-system
	4e966f58ab607       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   About a minute ago   Exited              kube-scheduler            0                   b77e08415993d       kube-scheduler-pause-642487            kube-system
	
	
	==> coredns [9cc96cc9a3498c97b9e9f90a49f184e0c4c09a5789e8c22374b75eefad9909ec] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41396 - 39398 "HINFO IN 2315590844852274863.5946995717533965393. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012707555s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a5225c7077c6f279cc61d6fc15cdb423b982b29412dd8a173b4bdc1c976af30b] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found]
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found]
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38955 - 10052 "HINFO IN 7011222262236218715.6389590110140029362. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015638808s
	
	
	==> describe nodes <==
	Name:               pause-642487
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-642487
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=pause-642487
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_26_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:26:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-642487
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:27:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:27:25 +0000   Sat, 15 Nov 2025 10:26:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:27:25 +0000   Sat, 15 Nov 2025 10:26:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:27:25 +0000   Sat, 15 Nov 2025 10:26:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:27:25 +0000   Sat, 15 Nov 2025 10:27:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-642487
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                68100b6c-9438-4e63-91cd-fedd50e3a311
	  Boot ID:                    6a33f504-3a8c-4d49-8fc5-301cf5275efc
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-8nbgb                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     76s
	  kube-system                 etcd-pause-642487                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         82s
	  kube-system                 kindnet-jh5hv                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      76s
	  kube-system                 kube-apiserver-pause-642487             250m (3%)     0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-controller-manager-pause-642487    200m (2%)     0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-proxy-jhknt                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-pause-642487             100m (1%)     0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 75s                kube-proxy       
	  Normal   Starting                 16s                kube-proxy       
	  Warning  CgroupV1                 88s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  88s (x8 over 88s)  kubelet          Node pause-642487 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    88s (x8 over 88s)  kubelet          Node pause-642487 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     88s (x8 over 88s)  kubelet          Node pause-642487 status is now: NodeHasSufficientPID
	  Normal   Starting                 82s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 82s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  81s                kubelet          Node pause-642487 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    81s                kubelet          Node pause-642487 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     81s                kubelet          Node pause-642487 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           77s                node-controller  Node pause-642487 event: Registered Node pause-642487 in Controller
	  Normal   NodeReady                35s                kubelet          Node pause-642487 status is now: NodeReady
	  Normal   RegisteredNode           12s                node-controller  Node pause-642487 event: Registered Node pause-642487 in Controller
	
	
	==> dmesg <==
	[Nov15 09:41] kmem.limit_in_bytes is deprecated and will be removed. Writing any value to this file has no effect. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov15 09:44] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +1.059558] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +1.023907] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +1.023868] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +1.023912] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +1.023925] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +2.047814] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +4.031639] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +8.127259] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[Nov15 09:45] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[ +32.253211] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	
	
	==> etcd [ad476564fba4d88d6b14f082586a864bcf39dadd4f5371f877dda5662a573cf6] <==
	{"level":"warn","ts":"2025-11-15T10:27:44.418507Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"186.240878ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:volume-scheduler\" limit:1 ","response":"range_response_count:1 size:725"}
	{"level":"info","ts":"2025-11-15T10:27:44.418586Z","caller":"traceutil/trace.go:172","msg":"trace[1882963809] linearizableReadLoop","detail":"{readStateIndex:503; appliedIndex:502; }","duration":"126.811592ms","start":"2025-11-15T10:27:44.291763Z","end":"2025-11-15T10:27:44.418575Z","steps":["trace[1882963809] 'read index received'  (duration: 46.42µs)","trace[1882963809] 'applied index is now lower than readState.Index'  (duration: 126.764556ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T10:27:44.418593Z","caller":"traceutil/trace.go:172","msg":"trace[539007305] range","detail":"{range_begin:/registry/clusterroles/system:volume-scheduler; range_end:; response_count:1; response_revision:477; }","duration":"186.342414ms","start":"2025-11-15T10:27:44.232238Z","end":"2025-11-15T10:27:44.418581Z","steps":["trace[539007305] 'agreement among raft nodes before linearized reading'  (duration: 59.564425ms)","trace[539007305] 'range keys from in-memory index tree'  (duration: 126.582575ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:27:44.419257Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"186.604363ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/etcd-pause-642487.1878274a2b4ddd4e\" limit:1 ","response":"range_response_count:1 size:804"}
	{"level":"info","ts":"2025-11-15T10:27:44.419326Z","caller":"traceutil/trace.go:172","msg":"trace[2025804408] range","detail":"{range_begin:/registry/events/kube-system/etcd-pause-642487.1878274a2b4ddd4e; range_end:; response_count:1; response_revision:477; }","duration":"186.666882ms","start":"2025-11-15T10:27:44.232631Z","end":"2025-11-15T10:27:44.419298Z","steps":["trace[2025804408] 'agreement among raft nodes before linearized reading'  (duration: 185.9811ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T10:27:44.608881Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.510957ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:certificates.k8s.io:kubelet-serving-approver\" limit:1 ","response":"range_response_count:1 size:684"}
	{"level":"info","ts":"2025-11-15T10:27:44.608949Z","caller":"traceutil/trace.go:172","msg":"trace[1605097997] range","detail":"{range_begin:/registry/clusterroles/system:certificates.k8s.io:kubelet-serving-approver; range_end:; response_count:1; response_revision:478; }","duration":"123.60061ms","start":"2025-11-15T10:27:44.485335Z","end":"2025-11-15T10:27:44.608936Z","steps":["trace[1605097997] 'agreement among raft nodes before linearized reading'  (duration: 62.770687ms)","trace[1605097997] 'range keys from in-memory index tree'  (duration: 60.635785ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T10:27:44.608883Z","caller":"traceutil/trace.go:172","msg":"trace[1972206687] transaction","detail":"{read_only:false; response_revision:479; number_of_response:1; }","duration":"186.184585ms","start":"2025-11-15T10:27:44.422678Z","end":"2025-11-15T10:27:44.608863Z","steps":["trace[1972206687] 'process raft request'  (duration: 125.470476ms)","trace[1972206687] 'compare'  (duration: 60.60663ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:27:44.862883Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"183.550982ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:kube-scheduler\" limit:1 ","response":"range_response_count:1 size:1835"}
	{"level":"warn","ts":"2025-11-15T10:27:44.862928Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.397729ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356660943256300 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-pause-642487.18782749e30878dc\" mod_revision:476 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-pause-642487.18782749e30878dc\" value_size:745 lease:6414984624088480390 >> failure:<request_range:<key:\"/registry/events/kube-system/kube-apiserver-pause-642487.18782749e30878dc\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-15T10:27:44.862970Z","caller":"traceutil/trace.go:172","msg":"trace[1151604462] range","detail":"{range_begin:/registry/clusterroles/system:kube-scheduler; range_end:; response_count:1; response_revision:480; }","duration":"183.628102ms","start":"2025-11-15T10:27:44.679304Z","end":"2025-11-15T10:27:44.862932Z","steps":["trace[1151604462] 'agreement among raft nodes before linearized reading'  (duration: 60.136831ms)","trace[1151604462] 'range keys from in-memory index tree'  (duration: 123.333731ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T10:27:44.863036Z","caller":"traceutil/trace.go:172","msg":"trace[152964354] transaction","detail":"{read_only:false; response_revision:481; number_of_response:1; }","duration":"184.96824ms","start":"2025-11-15T10:27:44.678047Z","end":"2025-11-15T10:27:44.863015Z","steps":["trace[152964354] 'process raft request'  (duration: 61.419277ms)","trace[152964354] 'compare'  (duration: 123.317437ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:27:45.062632Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.756469ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:controller:deployment-controller\" limit:1 ","response":"range_response_count:1 size:915"}
	{"level":"info","ts":"2025-11-15T10:27:45.062655Z","caller":"traceutil/trace.go:172","msg":"trace[1685650576] transaction","detail":"{read_only:false; response_revision:483; number_of_response:1; }","duration":"131.422385ms","start":"2025-11-15T10:27:44.931205Z","end":"2025-11-15T10:27:45.062627Z","steps":["trace[1685650576] 'process raft request'  (duration: 70.371666ms)","trace[1685650576] 'compare'  (duration: 60.923954ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T10:27:45.062698Z","caller":"traceutil/trace.go:172","msg":"trace[2098353046] range","detail":"{range_begin:/registry/clusterroles/system:controller:deployment-controller; range_end:; response_count:1; response_revision:482; }","duration":"129.839966ms","start":"2025-11-15T10:27:44.932844Z","end":"2025-11-15T10:27:45.062684Z","steps":["trace[2098353046] 'agreement among raft nodes before linearized reading'  (duration: 68.70765ms)","trace[2098353046] 'range keys from in-memory index tree'  (duration: 60.958742ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:27:45.343387Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"184.261824ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:controller:expand-controller\" limit:1 ","response":"range_response_count:1 size:801"}
	{"level":"info","ts":"2025-11-15T10:27:45.343467Z","caller":"traceutil/trace.go:172","msg":"trace[354462284] range","detail":"{range_begin:/registry/clusterroles/system:controller:expand-controller; range_end:; response_count:1; response_revision:484; }","duration":"184.354872ms","start":"2025-11-15T10:27:45.159097Z","end":"2025-11-15T10:27:45.343452Z","steps":["trace[354462284] 'agreement among raft nodes before linearized reading'  (duration: 60.737772ms)","trace[354462284] 'range keys from in-memory index tree'  (duration: 123.430916ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:27:45.343421Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.537669ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356660943256319 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-66bc5c9577-8nbgb.1878274af3e37f2d\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-66bc5c9577-8nbgb.1878274af3e37f2d\" value_size:733 lease:6414984624088480390 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-15T10:27:45.343583Z","caller":"traceutil/trace.go:172","msg":"trace[357575132] transaction","detail":"{read_only:false; response_revision:485; number_of_response:1; }","duration":"185.507543ms","start":"2025-11-15T10:27:45.158063Z","end":"2025-11-15T10:27:45.343570Z","steps":["trace[357575132] 'process raft request'  (duration: 61.777329ms)","trace[357575132] 'compare'  (duration: 123.43916ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:27:45.542634Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.135638ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:controller:job-controller\" limit:1 ","response":"range_response_count:1 size:782"}
	{"level":"info","ts":"2025-11-15T10:27:45.542701Z","caller":"traceutil/trace.go:172","msg":"trace[1152570653] range","detail":"{range_begin:/registry/clusterroles/system:controller:job-controller; range_end:; response_count:1; response_revision:486; }","duration":"132.217924ms","start":"2025-11-15T10:27:45.410469Z","end":"2025-11-15T10:27:45.542687Z","steps":["trace[1152570653] 'agreement among raft nodes before linearized reading'  (duration: 73.232546ms)","trace[1152570653] 'range keys from in-memory index tree'  (duration: 58.804489ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T10:27:45.542913Z","caller":"traceutil/trace.go:172","msg":"trace[604292703] transaction","detail":"{read_only:false; response_revision:487; number_of_response:1; }","duration":"134.020621ms","start":"2025-11-15T10:27:45.408876Z","end":"2025-11-15T10:27:45.542897Z","steps":["trace[604292703] 'process raft request'  (duration: 74.866601ms)","trace[604292703] 'compare'  (duration: 59.034454ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T10:27:45.766930Z","caller":"traceutil/trace.go:172","msg":"trace[1166695952] transaction","detail":"{read_only:false; response_revision:489; number_of_response:1; }","duration":"157.178451ms","start":"2025-11-15T10:27:45.609733Z","end":"2025-11-15T10:27:45.766911Z","steps":["trace[1166695952] 'process raft request'  (duration: 58.97062ms)","trace[1166695952] 'compare'  (duration: 98.057642ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:27:45.766904Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.035557ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:controller:replication-controller\" limit:1 ","response":"range_response_count:1 size:830"}
	{"level":"info","ts":"2025-11-15T10:27:45.767045Z","caller":"traceutil/trace.go:172","msg":"trace[466302424] range","detail":"{range_begin:/registry/clusterroles/system:controller:replication-controller; range_end:; response_count:1; response_revision:488; }","duration":"156.188567ms","start":"2025-11-15T10:27:45.610840Z","end":"2025-11-15T10:27:45.767029Z","steps":["trace[466302424] 'agreement among raft nodes before linearized reading'  (duration: 57.838344ms)","trace[466302424] 'range keys from in-memory index tree'  (duration: 98.09122ms)"],"step_count":2}
	
	
	==> etcd [f6ded8d1e4b3632bdec1a6e539a1f29b3c4950da7cb62c97ca951715e1ec6888] <==
	{"level":"warn","ts":"2025-11-15T10:26:35.533535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:26:35.543811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:26:35.557137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:26:35.572813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:26:35.580742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:26:35.586855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:26:35.627806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57814","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-15T10:27:30.603337Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-15T10:27:30.603431Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-642487","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-11-15T10:27:30.603525Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-15T10:27:32.690707Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-15T10:27:32.690811Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T10:27:32.690870Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-11-15T10:27:32.690924Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-15T10:27:32.690915Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-15T10:27:32.690915Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-15T10:27:32.690988Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-15T10:27:32.691010Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-15T10:27:32.690974Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-15T10:27:32.691033Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T10:27:32.690977Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-15T10:27:32.692632Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-11-15T10:27:32.692706Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T10:27:32.692742Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-15T10:27:32.692756Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-642487","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 10:28:00 up  2:10,  0 user,  load average: 3.18, 1.79, 1.27
	Linux pause-642487 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [602dacd6f75382f88ebd4b70e2712c63d504c2104f09e69e7ed06953540d3bd6] <==
	I1115 10:27:38.360572       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:27:38.361038       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1115 10:27:38.361247       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:27:38.361265       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:27:38.361288       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:27:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:27:38.656876       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:27:38.756577       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:27:38.756619       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:27:38.756746       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 10:27:42.562425       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"kindnet\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found]" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1115 10:27:42.562790       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"kindnet\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 10:27:42.563251       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"kindnet\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found]" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1115 10:27:43.856726       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:27:43.856773       1 metrics.go:72] Registering metrics
	I1115 10:27:43.856852       1 controller.go:711] "Syncing nftables rules"
	I1115 10:27:48.656566       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:27:48.656636       1 main.go:301] handling current node
	I1115 10:27:58.658026       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:27:58.658060       1 main.go:301] handling current node
	
	
	==> kindnet [8bc8fa4817aeb775ecf14d85b0146af1da01b6d0d4276cb3d4f7730073d8be92] <==
	I1115 10:26:44.956855       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:26:44.958991       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1115 10:26:44.959574       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:26:44.959592       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:26:44.959614       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:26:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:26:45.211811       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:26:45.211833       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:26:45.211845       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:26:45.212185       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 10:27:15.212576       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 10:27:15.212578       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1115 10:27:15.212609       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 10:27:15.212578       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1115 10:27:16.512863       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:27:16.512888       1 metrics.go:72] Registering metrics
	I1115 10:27:16.512999       1 controller.go:711] "Syncing nftables rules"
	I1115 10:27:25.216350       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:27:25.216410       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0ef58fd5a90ee7a6981b544c76026e5556c7883732036cbb3edd93733ccf4a6c] <==
	I1115 10:27:42.464236       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 10:27:42.464575       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1115 10:27:42.465290       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 10:27:42.464592       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1115 10:27:42.472373       1 aggregator.go:171] initial CRD sync complete...
	I1115 10:27:42.472391       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 10:27:42.472399       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 10:27:42.472405       1 cache.go:39] Caches are synced for autoregister controller
	I1115 10:27:42.480524       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1115 10:27:42.480899       1 policy_source.go:240] refreshing policies
	I1115 10:27:42.562450       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 10:27:42.562900       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1115 10:27:42.565637       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 10:27:42.569682       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1115 10:27:42.566738       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1115 10:27:42.570877       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 10:27:42.572922       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:27:42.576919       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1115 10:27:42.830673       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 10:27:43.352970       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:27:46.897933       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:27:48.244647       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:27:48.493469       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:27:48.545580       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:27:48.643631       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [212c7652d9dd7a0609b346e2da303aaeeb5bbe32e002bba5a7822c363e37bab1] <==
	W1115 10:27:31.608085       1 logging.go:55] [core] [Channel #9 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.608106       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.608136       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.608141       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.608146       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.608149       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.608151       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.608173       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.608276       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.609486       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.609656       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.609679       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.609694       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.609704       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.609664       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.609786       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.609741       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.609747       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.609883       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.609896       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.609902       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.609909       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.609964       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.609995       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.610104       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [3bc6859c7b0047c536c2b2ef1f446ac2fc769d38f898f17f61d309bf77faba98] <==
	I1115 10:26:43.201635       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 10:26:43.201699       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 10:26:43.201789       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 10:26:43.202763       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 10:26:43.202813       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1115 10:26:43.202824       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1115 10:26:43.203171       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1115 10:26:43.203347       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1115 10:26:43.203369       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 10:26:43.203428       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1115 10:26:43.203457       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 10:26:43.204922       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 10:26:43.205029       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:26:43.205044       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 10:26:43.205081       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 10:26:43.205263       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 10:26:43.205715       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 10:26:43.206652       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 10:26:43.209635       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1115 10:26:43.211775       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 10:26:43.300651       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:26:43.300668       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:26:43.300674       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 10:26:43.310191       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:27:28.156864       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [fb8520106a10f5510cb789cf55fa41e2dcef15b4e49b1e8e695b835e5d20c27f] <==
	I1115 10:27:48.239622       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:27:48.239643       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:27:48.239653       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 10:27:48.239632       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 10:27:48.239813       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 10:27:48.239858       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 10:27:48.240252       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1115 10:27:48.240284       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 10:27:48.240324       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 10:27:48.240290       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-642487"
	I1115 10:27:48.240506       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1115 10:27:48.241480       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 10:27:48.243564       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 10:27:48.245180       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1115 10:27:48.245225       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1115 10:27:48.245267       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1115 10:27:48.245283       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1115 10:27:48.245289       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1115 10:27:48.245327       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:27:48.245353       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 10:27:48.250907       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 10:27:48.254142       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 10:27:48.256366       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 10:27:48.259607       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 10:27:48.263839       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [3d25753d2c98879b7a05bcb03f07c48e41dbace9c81399d0d21f0952ab2ca92b] <==
	I1115 10:27:38.456797       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:27:38.588910       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1115 10:27:42.569318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"pause-642487\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1115 10:27:44.189480       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:27:44.189515       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1115 10:27:44.189598       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:27:44.208199       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:27:44.208249       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:27:44.213740       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:27:44.214108       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:27:44.214140       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:27:44.215650       1 config.go:200] "Starting service config controller"
	I1115 10:27:44.215676       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:27:44.215685       1 config.go:309] "Starting node config controller"
	I1115 10:27:44.215714       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:27:44.215722       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:27:44.215739       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:27:44.215754       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:27:44.215777       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:27:44.215782       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:27:44.316038       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:27:44.316077       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:27:44.316106       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [7816357c42fe9c2750e08cc8de3eaa7ada90dbdcaa66ef57679625b7d186d0b6] <==
	I1115 10:26:44.801084       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:26:44.961260       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:26:45.061554       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:26:45.061601       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1115 10:26:45.061715       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:26:45.094386       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:26:45.094466       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:26:45.102862       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:26:45.104412       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:26:45.104498       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:26:45.106564       1 config.go:309] "Starting node config controller"
	I1115 10:26:45.106867       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:26:45.107107       1 config.go:200] "Starting service config controller"
	I1115 10:26:45.107129       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:26:45.107408       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:26:45.107456       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:26:45.108979       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:26:45.109004       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:26:45.207640       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:26:45.207737       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:26:45.207770       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:26:45.210031       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4e966f58ab607d5bbe97198f865699126b593ea8fce93b6ac7fa51dbf052e144] <==
	E1115 10:26:36.405475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 10:26:36.405545       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 10:26:36.405617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 10:26:36.404532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 10:26:36.405684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 10:26:36.405691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 10:26:36.404901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1115 10:26:36.406465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 10:26:36.406690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 10:26:37.230174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 10:26:37.243154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 10:26:37.265299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 10:26:37.301613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 10:26:37.303478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 10:26:37.341544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1115 10:26:37.376966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 10:26:37.457178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 10:26:37.484666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1115 10:26:39.100746       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:27:30.604464       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1115 10:27:30.604485       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:27:30.605483       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1115 10:27:30.607067       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1115 10:27:30.607135       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1115 10:27:30.607202       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [79929c6a2bd5c66ee1f199a1d7d3bd6158f0914fb6ede8beb6a05b4c765efed2] <==
	I1115 10:27:39.669714       1 serving.go:386] Generated self-signed cert in-memory
	W1115 10:27:42.374602       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1115 10:27:42.374706       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1115 10:27:42.374740       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1115 10:27:42.374766       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1115 10:27:42.480379       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 10:27:42.480464       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:27:42.556141       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:27:42.556278       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:27:42.559073       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:27:42.559740       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 10:27:42.657152       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:27:38 pause-642487 kubelet[1402]: E1115 10:27:38.017075    1402 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-642487\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="bbc3b022e46d3c086f435225c4e0e99e" pod="kube-system/kube-apiserver-pause-642487"
	Nov 15 10:27:38 pause-642487 kubelet[1402]: E1115 10:27:38.017353    1402 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jhknt\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="66848a4f-7a86-4b64-adb1-2ebb61ff9ddc" pod="kube-system/kube-proxy-jhknt"
	Nov 15 10:27:38 pause-642487 kubelet[1402]: E1115 10:27:38.017642    1402 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-jh5hv\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="17e3aa21-2ac9-4ce2-9a63-54e13281bde5" pod="kube-system/kindnet-jh5hv"
	Nov 15 10:27:38 pause-642487 kubelet[1402]: I1115 10:27:38.017897    1402 scope.go:117] "RemoveContainer" containerID="9cc96cc9a3498c97b9e9f90a49f184e0c4c09a5789e8c22374b75eefad9909ec"
	Nov 15 10:27:38 pause-642487 kubelet[1402]: E1115 10:27:38.017891    1402 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-642487\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="46fb057cdc1d9b8324e561dc35393527" pod="kube-system/kube-scheduler-pause-642487"
	Nov 15 10:27:38 pause-642487 kubelet[1402]: E1115 10:27:38.018230    1402 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-642487\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="9cca22b901a4b9e18450d56334cf0a17" pod="kube-system/kube-controller-manager-pause-642487"
	Nov 15 10:27:38 pause-642487 kubelet[1402]: E1115 10:27:38.018507    1402 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-642487\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="bbc3b022e46d3c086f435225c4e0e99e" pod="kube-system/kube-apiserver-pause-642487"
	Nov 15 10:27:38 pause-642487 kubelet[1402]: E1115 10:27:38.018765    1402 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jhknt\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="66848a4f-7a86-4b64-adb1-2ebb61ff9ddc" pod="kube-system/kube-proxy-jhknt"
	Nov 15 10:27:38 pause-642487 kubelet[1402]: E1115 10:27:38.019006    1402 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-jh5hv\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="17e3aa21-2ac9-4ce2-9a63-54e13281bde5" pod="kube-system/kindnet-jh5hv"
	Nov 15 10:27:38 pause-642487 kubelet[1402]: E1115 10:27:38.019317    1402 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-8nbgb\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="9f4b3526-3889-4bc5-81e0-cbab60c70c2d" pod="kube-system/coredns-66bc5c9577-8nbgb"
	Nov 15 10:27:38 pause-642487 kubelet[1402]: E1115 10:27:38.019580    1402 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-642487\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="46fb057cdc1d9b8324e561dc35393527" pod="kube-system/kube-scheduler-pause-642487"
	Nov 15 10:27:38 pause-642487 kubelet[1402]: E1115 10:27:38.019849    1402 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-642487\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="26a9d5105ab43ff6347035d744bf86c1" pod="kube-system/etcd-pause-642487"
	Nov 15 10:27:42 pause-642487 kubelet[1402]: E1115 10:27:42.272870    1402 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-642487\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-642487' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 15 10:27:42 pause-642487 kubelet[1402]: E1115 10:27:42.272887    1402 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-642487\" is forbidden: User \"system:node:pause-642487\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-642487' and this object" podUID="46fb057cdc1d9b8324e561dc35393527" pod="kube-system/kube-scheduler-pause-642487"
	Nov 15 10:27:42 pause-642487 kubelet[1402]: E1115 10:27:42.274200    1402 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-642487\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-642487' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 15 10:27:42 pause-642487 kubelet[1402]: E1115 10:27:42.381611    1402 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-642487\" is forbidden: User \"system:node:pause-642487\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-642487' and this object" podUID="26a9d5105ab43ff6347035d744bf86c1" pod="kube-system/etcd-pause-642487"
	Nov 15 10:27:42 pause-642487 kubelet[1402]: E1115 10:27:42.456134    1402 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-642487\" is forbidden: User \"system:node:pause-642487\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-642487' and this object" podUID="9cca22b901a4b9e18450d56334cf0a17" pod="kube-system/kube-controller-manager-pause-642487"
	Nov 15 10:27:42 pause-642487 kubelet[1402]: E1115 10:27:42.459799    1402 status_manager.go:1018] "Failed to get status for pod" err=<
	Nov 15 10:27:42 pause-642487 kubelet[1402]:         pods "kube-apiserver-pause-642487" is forbidden: User "system:node:pause-642487" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-642487' and this object
	Nov 15 10:27:42 pause-642487 kubelet[1402]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found]
	Nov 15 10:27:42 pause-642487 kubelet[1402]:  > podUID="bbc3b022e46d3c086f435225c4e0e99e" pod="kube-system/kube-apiserver-pause-642487"
	Nov 15 10:27:49 pause-642487 kubelet[1402]: W1115 10:27:49.076185    1402 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 15 10:27:57 pause-642487 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:27:57 pause-642487 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:27:57 pause-642487 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-642487 -n pause-642487
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-642487 -n pause-642487: exit status 2 (395.668761ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
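The template probes above (and the {{.Host}} probe further down) use minikube's Go-template --format flag to read a single status field; as helpers_test.go notes, a non-zero exit such as 2 "may be ok" while the cluster is paused. A minimal reproduction sketch, assuming the pause-642487 profile still exists on the build host:

	# Query individual status fields with the same Go templates the harness uses;
	# a non-zero exit code is expected while components are paused or stopped.
	out/minikube-linux-amd64 status -p pause-642487 --format={{.Host}}
	out/minikube-linux-amd64 status -p pause-642487 --format={{.APIServer}}
	# Untemplated status for the full component overview.
	out/minikube-linux-amd64 status -p pause-642487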
helpers_test.go:269: (dbg) Run:  kubectl --context pause-642487 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-642487
helpers_test.go:243: (dbg) docker inspect pause-642487:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "edc8640ba52ff151e06a6fb32b36e92157bdd4731a041218ada6f1b739d56fba",
	        "Created": "2025-11-15T10:26:21.921151466Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 236220,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:26:21.969161358Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/edc8640ba52ff151e06a6fb32b36e92157bdd4731a041218ada6f1b739d56fba/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/edc8640ba52ff151e06a6fb32b36e92157bdd4731a041218ada6f1b739d56fba/hostname",
	        "HostsPath": "/var/lib/docker/containers/edc8640ba52ff151e06a6fb32b36e92157bdd4731a041218ada6f1b739d56fba/hosts",
	        "LogPath": "/var/lib/docker/containers/edc8640ba52ff151e06a6fb32b36e92157bdd4731a041218ada6f1b739d56fba/edc8640ba52ff151e06a6fb32b36e92157bdd4731a041218ada6f1b739d56fba-json.log",
	        "Name": "/pause-642487",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-642487:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-642487",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "edc8640ba52ff151e06a6fb32b36e92157bdd4731a041218ada6f1b739d56fba",
	                "LowerDir": "/var/lib/docker/overlay2/9859fbb95620510513364cc8ff34c8aecf3a8a86ed019d2be4d7c2fb2b4c14ce-init/diff:/var/lib/docker/overlay2/507c85b6fb43d9c98216fac79d7a8d08dd20b2a63d0fbfc758336e5d38c04044/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9859fbb95620510513364cc8ff34c8aecf3a8a86ed019d2be4d7c2fb2b4c14ce/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9859fbb95620510513364cc8ff34c8aecf3a8a86ed019d2be4d7c2fb2b4c14ce/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9859fbb95620510513364cc8ff34c8aecf3a8a86ed019d2be4d7c2fb2b4c14ce/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-642487",
	                "Source": "/var/lib/docker/volumes/pause-642487/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-642487",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-642487",
	                "name.minikube.sigs.k8s.io": "pause-642487",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c8237d2134376614d24fe570567f8c9bd2890ed695cad23e9198bfb5365f2fae",
	            "SandboxKey": "/var/run/docker/netns/c8237d213437",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32974"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32975"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32978"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32976"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32977"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-642487": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "45fa74c79adc443d7e7679f83553aa89d1028f6c56e4ba6acaf65b07e5eda1b8",
	                    "EndpointID": "70ea5df1a669f8f39338209acc6b047769e5478c78e5242adb2e8eb5d47b718e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "8a:1c:92:f8:8e:b8",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-642487",
	                        "edc8640ba52f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
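The full docker inspect dump above can be narrowed to the fields the post-mortem actually reads by passing docker inspect's Go-template --format flag. A hedged sketch, assuming the pause-642487 kic container is still present:

	# Run state of the kic node container (the Running/Paused flags shown in the dump above).
	docker inspect pause-642487 --format '{{.State.Status}} paused={{.State.Paused}}'
	# Host port mappings published for SSH and the API server.
	docker inspect pause-642487 --format '{{range $port, $bindings := .NetworkSettings.Ports}}{{$port}} -> {{(index $bindings 0).HostPort}}{{"\n"}}{{end}}'
	# Node IP on the profile network.
	docker inspect pause-642487 --format '{{(index .NetworkSettings.Networks "pause-642487").IPAddress}}'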
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-642487 -n pause-642487
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-642487 -n pause-642487: exit status 2 (403.323877ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-642487 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-642487 logs -n 25: (1.496113494s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-931243 sudo systemctl cat cri-docker --no-pager                                                                                │ cilium-931243             │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │                     │
	│ ssh     │ -p cilium-931243 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                           │ cilium-931243             │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │                     │
	│ ssh     │ -p cilium-931243 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                     │ cilium-931243             │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │                     │
	│ ssh     │ -p cilium-931243 sudo cri-dockerd --version                                                                                              │ cilium-931243             │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │                     │
	│ ssh     │ -p cilium-931243 sudo systemctl status containerd --all --full --no-pager                                                                │ cilium-931243             │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │                     │
	│ ssh     │ -p cilium-931243 sudo systemctl cat containerd --no-pager                                                                                │ cilium-931243             │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │                     │
	│ ssh     │ -p cilium-931243 sudo cat /lib/systemd/system/containerd.service                                                                         │ cilium-931243             │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │                     │
	│ ssh     │ -p cilium-931243 sudo cat /etc/containerd/config.toml                                                                                    │ cilium-931243             │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │                     │
	│ ssh     │ -p cilium-931243 sudo containerd config dump                                                                                             │ cilium-931243             │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │                     │
	│ ssh     │ -p cilium-931243 sudo systemctl status crio --all --full --no-pager                                                                      │ cilium-931243             │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │                     │
	│ ssh     │ -p cilium-931243 sudo systemctl cat crio --no-pager                                                                                      │ cilium-931243             │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │                     │
	│ ssh     │ -p cilium-931243 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                            │ cilium-931243             │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │                     │
	│ ssh     │ -p cilium-931243 sudo crio config                                                                                                        │ cilium-931243             │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │                     │
	│ delete  │ -p cilium-931243                                                                                                                         │ cilium-931243             │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │ 15 Nov 25 10:26 UTC │
	│ start   │ -p stopped-upgrade-567029 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-567029    │ jenkins │ v1.32.0 │ 15 Nov 25 10:27 UTC │                     │
	│ ssh     │ -p NoKubernetes-855068 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-855068       │ jenkins │ v1.37.0 │ 15 Nov 25 10:27 UTC │                     │
	│ stop    │ -p NoKubernetes-855068                                                                                                                   │ NoKubernetes-855068       │ jenkins │ v1.37.0 │ 15 Nov 25 10:27 UTC │ 15 Nov 25 10:27 UTC │
	│ start   │ -p NoKubernetes-855068 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-855068       │ jenkins │ v1.37.0 │ 15 Nov 25 10:27 UTC │ 15 Nov 25 10:27 UTC │
	│ ssh     │ -p NoKubernetes-855068 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-855068       │ jenkins │ v1.37.0 │ 15 Nov 25 10:27 UTC │                     │
	│ delete  │ -p NoKubernetes-855068                                                                                                                   │ NoKubernetes-855068       │ jenkins │ v1.37.0 │ 15 Nov 25 10:27 UTC │ 15 Nov 25 10:27 UTC │
	│ start   │ -p missing-upgrade-229925 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-229925    │ jenkins │ v1.32.0 │ 15 Nov 25 10:27 UTC │                     │
	│ start   │ -p pause-642487 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-642487              │ jenkins │ v1.37.0 │ 15 Nov 25 10:27 UTC │ 15 Nov 25 10:27 UTC │
	│ delete  │ -p offline-crio-637291                                                                                                                   │ offline-crio-637291       │ jenkins │ v1.37.0 │ 15 Nov 25 10:27 UTC │ 15 Nov 25 10:27 UTC │
	│ start   │ -p kubernetes-upgrade-914881 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-914881 │ jenkins │ v1.37.0 │ 15 Nov 25 10:27 UTC │                     │
	│ pause   │ -p pause-642487 --alsologtostderr -v=5                                                                                                   │ pause-642487              │ jenkins │ v1.37.0 │ 15 Nov 25 10:27 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:27:38
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:27:38.134995  254667 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:27:38.135262  254667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:27:38.135271  254667 out.go:374] Setting ErrFile to fd 2...
	I1115 10:27:38.135275  254667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:27:38.135473  254667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 10:27:38.135972  254667 out.go:368] Setting JSON to false
	I1115 10:27:38.136988  254667 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7795,"bootTime":1763194663,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:27:38.137085  254667 start.go:143] virtualization: kvm guest
	I1115 10:27:38.138976  254667 out.go:179] * [kubernetes-upgrade-914881] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:27:38.140169  254667 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 10:27:38.140182  254667 notify.go:221] Checking for updates...
	I1115 10:27:38.142728  254667 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:27:38.144088  254667 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:27:38.145205  254667 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube
	I1115 10:27:38.146421  254667 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:27:38.147382  254667 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:27:38.148825  254667 config.go:182] Loaded profile config "missing-upgrade-229925": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1115 10:27:38.148942  254667 config.go:182] Loaded profile config "pause-642487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:27:38.149046  254667 config.go:182] Loaded profile config "stopped-upgrade-567029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1115 10:27:38.149160  254667 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:27:38.189202  254667 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:27:38.189373  254667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:27:38.245155  254667 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:56 SystemTime:2025-11-15 10:27:38.235518498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:27:38.245269  254667 docker.go:319] overlay module found
	I1115 10:27:38.246993  254667 out.go:179] * Using the docker driver based on user configuration
	I1115 10:27:38.248229  254667 start.go:309] selected driver: docker
	I1115 10:27:38.248245  254667 start.go:930] validating driver "docker" against <nil>
	I1115 10:27:38.248257  254667 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:27:38.249153  254667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:27:38.323500  254667 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:56 SystemTime:2025-11-15 10:27:38.314284648 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:27:38.323676  254667 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 10:27:38.323875  254667 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1115 10:27:38.325769  254667 out.go:179] * Using Docker driver with root privileges
	I1115 10:27:38.326798  254667 cni.go:84] Creating CNI manager for ""
	I1115 10:27:38.326861  254667 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:27:38.326872  254667 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 10:27:38.326935  254667 start.go:353] cluster config:
	{Name:kubernetes-upgrade-914881 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-914881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:27:38.328195  254667 out.go:179] * Starting "kubernetes-upgrade-914881" primary control-plane node in "kubernetes-upgrade-914881" cluster
	I1115 10:27:38.329182  254667 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:27:38.330133  254667 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:27:38.331029  254667 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 10:27:38.331057  254667 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1115 10:27:38.331073  254667 cache.go:65] Caching tarball of preloaded images
	I1115 10:27:38.331143  254667 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:27:38.331160  254667 preload.go:238] Found /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 10:27:38.331178  254667 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1115 10:27:38.331286  254667 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/config.json ...
	I1115 10:27:38.331302  254667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/config.json: {Name:mkc062520ed9eead4ff3381037c44d504ca62a35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:27:38.350240  254667 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:27:38.350261  254667 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:27:38.350288  254667 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:27:38.350335  254667 start.go:360] acquireMachinesLock for kubernetes-upgrade-914881: {Name:mkc7cac26c6de5f12a63525aff7e026bda3aca7a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:27:38.350444  254667 start.go:364] duration metric: took 88.286µs to acquireMachinesLock for "kubernetes-upgrade-914881"
	I1115 10:27:38.350480  254667 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-914881 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-914881 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:27:38.350556  254667 start.go:125] createHost starting for "" (driver="docker")
	I1115 10:27:36.093334  252977 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.42 as a tarball
	I1115 10:27:36.093350  252977 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.42 from local cache
	I1115 10:27:37.575842  252977 cache.go:168] failed to download gcr.io/k8s-minikube/kicbase:v0.0.42, will try fallback image if available: error loading image: Error response from daemon: client version 1.43 is too old. Minimum supported API version is 1.44, please upgrade your client to a newer version
	I1115 10:27:37.575853  252977 image.go:79] Checking for docker.io/kicbase/stable:v0.0.42 in local docker daemon
	I1115 10:27:37.596645  252977 cache.go:149] Downloading docker.io/kicbase/stable:v0.0.42 to local cache
	I1115 10:27:37.596851  252977 image.go:63] Checking for docker.io/kicbase/stable:v0.0.42 in local cache directory
	I1115 10:27:37.596881  252977 image.go:118] Writing docker.io/kicbase/stable:v0.0.42 to local cache
	I1115 10:27:37.554782  253283 cli_runner.go:164] Run: docker network inspect pause-642487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:27:37.572781  253283 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1115 10:27:37.577582  253283 kubeadm.go:884] updating cluster {Name:pause-642487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-642487 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:27:37.577759  253283 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:27:37.577813  253283 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:27:37.614088  253283 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:27:37.614111  253283 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:27:37.614166  253283 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:27:37.641539  253283 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:27:37.641574  253283 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:27:37.641585  253283 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1115 10:27:37.641715  253283 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-642487 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-642487 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:27:37.641858  253283 ssh_runner.go:195] Run: crio config
	I1115 10:27:37.690462  253283 cni.go:84] Creating CNI manager for ""
	I1115 10:27:37.690484  253283 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:27:37.690502  253283 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:27:37.690523  253283 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-642487 NodeName:pause-642487 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:27:37.690635  253283 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-642487"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:27:37.690694  253283 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:27:37.698812  253283 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:27:37.698886  253283 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:27:37.706583  253283 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1115 10:27:37.719419  253283 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:27:37.731798  253283 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1115 10:27:37.744098  253283 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:27:37.747771  253283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:27:37.844646  253283 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:27:37.859000  253283 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487 for IP: 192.168.76.2
	I1115 10:27:37.859023  253283 certs.go:195] generating shared ca certs ...
	I1115 10:27:37.859039  253283 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:27:37.859243  253283 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 10:27:37.859291  253283 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 10:27:37.859301  253283 certs.go:257] generating profile certs ...
	I1115 10:27:37.859379  253283 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487/client.key
	I1115 10:27:37.859433  253283 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487/apiserver.key.164c5544
	I1115 10:27:37.859466  253283 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487/proxy-client.key
	I1115 10:27:37.859559  253283 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:27:37.859587  253283 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:27:37.859596  253283 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:27:37.859625  253283 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:27:37.859646  253283 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:27:37.859667  253283 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:27:37.859703  253283 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:27:37.860410  253283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:27:37.902780  253283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:27:37.921342  253283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:27:37.938781  253283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:27:37.955935  253283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1115 10:27:37.972651  253283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:27:37.990607  253283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:27:38.011467  253283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:27:38.062701  253283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:27:38.157877  253283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:27:38.275682  253283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:27:38.375283  253283 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:27:38.462546  253283 ssh_runner.go:195] Run: openssl version
	I1115 10:27:38.471708  253283 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:27:38.483846  253283 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:27:38.489014  253283 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:27:38.489126  253283 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:27:38.587995  253283 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:27:38.661038  253283 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:27:38.675786  253283 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:27:38.680146  253283 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:27:38.680199  253283 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:27:38.782245  253283 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:27:38.793941  253283 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:27:38.864858  253283 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:27:38.869050  253283 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:27:38.869111  253283 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:27:38.975335  253283 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:27:38.985622  253283 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:27:39.057166  253283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:27:39.187706  253283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:27:39.293663  253283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:27:39.466575  253283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:27:39.568779  253283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:27:39.680429  253283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1115 10:27:39.785676  253283 kubeadm.go:401] StartCluster: {Name:pause-642487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-642487 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:27:39.786136  253283 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:27:39.786233  253283 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:27:39.881968  253283 cri.go:89] found id: "a5225c7077c6f279cc61d6fc15cdb423b982b29412dd8a173b4bdc1c976af30b"
	I1115 10:27:39.882055  253283 cri.go:89] found id: "79929c6a2bd5c66ee1f199a1d7d3bd6158f0914fb6ede8beb6a05b4c765efed2"
	I1115 10:27:39.882062  253283 cri.go:89] found id: "3d25753d2c98879b7a05bcb03f07c48e41dbace9c81399d0d21f0952ab2ca92b"
	I1115 10:27:39.882067  253283 cri.go:89] found id: "602dacd6f75382f88ebd4b70e2712c63d504c2104f09e69e7ed06953540d3bd6"
	I1115 10:27:39.882071  253283 cri.go:89] found id: "0ef58fd5a90ee7a6981b544c76026e5556c7883732036cbb3edd93733ccf4a6c"
	I1115 10:27:39.882076  253283 cri.go:89] found id: "ad476564fba4d88d6b14f082586a864bcf39dadd4f5371f877dda5662a573cf6"
	I1115 10:27:39.882080  253283 cri.go:89] found id: "fb8520106a10f5510cb789cf55fa41e2dcef15b4e49b1e8e695b835e5d20c27f"
	I1115 10:27:39.882084  253283 cri.go:89] found id: "9cc96cc9a3498c97b9e9f90a49f184e0c4c09a5789e8c22374b75eefad9909ec"
	I1115 10:27:39.882088  253283 cri.go:89] found id: "8bc8fa4817aeb775ecf14d85b0146af1da01b6d0d4276cb3d4f7730073d8be92"
	I1115 10:27:39.882098  253283 cri.go:89] found id: "7816357c42fe9c2750e08cc8de3eaa7ada90dbdcaa66ef57679625b7d186d0b6"
	I1115 10:27:39.882102  253283 cri.go:89] found id: "f6ded8d1e4b3632bdec1a6e539a1f29b3c4950da7cb62c97ca951715e1ec6888"
	I1115 10:27:39.882106  253283 cri.go:89] found id: "3bc6859c7b0047c536c2b2ef1f446ac2fc769d38f898f17f61d309bf77faba98"
	I1115 10:27:39.882110  253283 cri.go:89] found id: "212c7652d9dd7a0609b346e2da303aaeeb5bbe32e002bba5a7822c363e37bab1"
	I1115 10:27:39.882132  253283 cri.go:89] found id: "4e966f58ab607d5bbe97198f865699126b593ea8fce93b6ac7fa51dbf052e144"
	I1115 10:27:39.882136  253283 cri.go:89] found id: ""
	I1115 10:27:39.882183  253283 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:27:39.900176  253283 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:27:39Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:27:39.900259  253283 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:27:39.964647  253283 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:27:39.964670  253283 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:27:39.964808  253283 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:27:39.975584  253283 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:27:39.976348  253283 kubeconfig.go:125] found "pause-642487" server: "https://192.168.76.2:8443"
	I1115 10:27:39.977118  253283 kapi.go:59] client config for pause-642487: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487/client.key", CAFile:"/home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 10:27:39.977645  253283 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1115 10:27:39.977660  253283 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1115 10:27:39.977667  253283 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1115 10:27:39.977673  253283 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1115 10:27:39.977682  253283 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1115 10:27:39.978236  253283 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:27:39.991964  253283 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1115 10:27:39.991999  253283 kubeadm.go:602] duration metric: took 27.323097ms to restartPrimaryControlPlane
	I1115 10:27:39.992009  253283 kubeadm.go:403] duration metric: took 206.34691ms to StartCluster
	I1115 10:27:39.992028  253283 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:27:39.992086  253283 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:27:39.992864  253283 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:27:39.993140  253283 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:27:39.993414  253283 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:27:39.993634  253283 config.go:182] Loaded profile config "pause-642487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:27:39.997725  253283 out.go:179] * Enabled addons: 
	I1115 10:27:39.997900  253283 out.go:179] * Verifying Kubernetes components...
	I1115 10:27:37.713862  250315 out.go:204] * Another minikube instance is downloading dependencies... 
	I1115 10:27:38.352196  254667 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:27:38.352388  254667 start.go:159] libmachine.API.Create for "kubernetes-upgrade-914881" (driver="docker")
	I1115 10:27:38.352413  254667 client.go:173] LocalClient.Create starting
	I1115 10:27:38.352526  254667 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem
	I1115 10:27:38.352561  254667 main.go:143] libmachine: Decoding PEM data...
	I1115 10:27:38.352578  254667 main.go:143] libmachine: Parsing certificate...
	I1115 10:27:38.352624  254667 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem
	I1115 10:27:38.352642  254667 main.go:143] libmachine: Decoding PEM data...
	I1115 10:27:38.352656  254667 main.go:143] libmachine: Parsing certificate...
	I1115 10:27:38.352942  254667 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-914881 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:27:38.378137  254667 cli_runner.go:211] docker network inspect kubernetes-upgrade-914881 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:27:38.378243  254667 network_create.go:284] running [docker network inspect kubernetes-upgrade-914881] to gather additional debugging logs...
	I1115 10:27:38.378276  254667 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-914881
	W1115 10:27:38.397820  254667 cli_runner.go:211] docker network inspect kubernetes-upgrade-914881 returned with exit code 1
	I1115 10:27:38.397859  254667 network_create.go:287] error running [docker network inspect kubernetes-upgrade-914881]: docker network inspect kubernetes-upgrade-914881: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-914881 not found
	I1115 10:27:38.397879  254667 network_create.go:289] output of [docker network inspect kubernetes-upgrade-914881]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-914881 not found
	
	** /stderr **
	I1115 10:27:38.398042  254667 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:27:38.415280  254667 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-77644897380e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:f9:e2:83:c3:91} reservation:<nil>}
	I1115 10:27:38.415639  254667 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e808d03b2052 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:03:24:ef:92:a1} reservation:<nil>}
	I1115 10:27:38.415973  254667 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3fb45eeaa66e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1a:b3:b7:0f:2e:99} reservation:<nil>}
	I1115 10:27:38.416314  254667 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-45fa74c79adc IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2e:55:cb:22:c6:84} reservation:<nil>}
	I1115 10:27:38.416762  254667 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ea8210}
	I1115 10:27:38.416788  254667 network_create.go:124] attempt to create docker network kubernetes-upgrade-914881 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1115 10:27:38.416835  254667 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-914881 kubernetes-upgrade-914881
	I1115 10:27:38.463994  254667 network_create.go:108] docker network kubernetes-upgrade-914881 192.168.85.0/24 created
	I1115 10:27:38.464027  254667 kic.go:121] calculated static IP "192.168.85.2" for the "kubernetes-upgrade-914881" container
	I1115 10:27:38.464112  254667 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:27:38.485021  254667 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-914881 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-914881 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:27:38.505381  254667 oci.go:103] Successfully created a docker volume kubernetes-upgrade-914881
	I1115 10:27:38.505470  254667 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-914881-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-914881 --entrypoint /usr/bin/test -v kubernetes-upgrade-914881:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:27:38.939065  254667 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-914881
	I1115 10:27:38.939162  254667 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 10:27:38.939182  254667 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:27:38.939259  254667 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-914881:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1115 10:27:41.817238  250315 image.go:63] Checking for docker.io/kicbase/stable:v0.0.42 in local cache directory
	I1115 10:27:41.817278  250315 image.go:66] Found docker.io/kicbase/stable:v0.0.42 in local cache directory, skipping pull
	I1115 10:27:41.817285  250315 image.go:105] docker.io/kicbase/stable:v0.0.42 exists in cache, skipping pull
	I1115 10:27:41.817302  250315 cache.go:152] successfully saved docker.io/kicbase/stable:v0.0.42 as a tarball
	I1115 10:27:41.817308  250315 cache.go:162] Loading docker.io/kicbase/stable:v0.0.42 from local cache
	I1115 10:27:43.305612  250315 cache.go:168] failed to download docker.io/kicbase/stable:v0.0.42, will try fallback image if available: error loading image: Error response from daemon: client version 1.43 is too old. Minimum supported API version is 1.44, please upgrade your client to a newer version
	E1115 10:27:43.305640  250315 cache.go:189] Error downloading kic artifacts:  failed to download kic base image or any fallback image
	I1115 10:27:43.305660  250315 cache.go:194] Successfully downloaded all kic artifacts
	I1115 10:27:43.305719  250315 start.go:365] acquiring machines lock for stopped-upgrade-567029: {Name:mk5336ae4d4d03321c8790135af3351b26bbd5f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:27:43.305858  250315 start.go:369] acquired machines lock for "stopped-upgrade-567029" in 117.264µs
	I1115 10:27:43.305901  250315 start.go:93] Provisioning new machine with config: &{Name:stopped-upgrade-567029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-567029 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:27:43.306025  250315 start.go:125] createHost starting for "" (driver="docker")
	I1115 10:27:41.817086  252977 cache.go:152] successfully saved docker.io/kicbase/stable:v0.0.42 as a tarball
	I1115 10:27:41.817099  252977 cache.go:162] Loading docker.io/kicbase/stable:v0.0.42 from local cache
	I1115 10:27:43.338074  252977 cache.go:168] failed to download docker.io/kicbase/stable:v0.0.42, will try fallback image if available: error loading image: Error response from daemon: client version 1.43 is too old. Minimum supported API version is 1.44, please upgrade your client to a newer version
	E1115 10:27:43.338104  252977 cache.go:189] Error downloading kic artifacts:  failed to download kic base image or any fallback image
	I1115 10:27:43.338124  252977 cache.go:194] Successfully downloaded all kic artifacts
	I1115 10:27:43.338174  252977 start.go:365] acquiring machines lock for missing-upgrade-229925: {Name:mkeb4f626477dd186111fb07e6d25c72f7129196 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:27:43.338284  252977 start.go:369] acquired machines lock for "missing-upgrade-229925" in 94.795µs
	I1115 10:27:43.338306  252977 start.go:93] Provisioning new machine with config: &{Name:missing-upgrade-229925 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-229925 Namespace:default APIServerName:minikubeCA APIServ
erNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: S
taticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:27:43.338369  252977 start.go:125] createHost starting for "" (driver="docker")
	I1115 10:27:43.374820  252977 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:27:43.375196  252977 start.go:159] libmachine.API.Create for "missing-upgrade-229925" (driver="docker")
	I1115 10:27:43.375265  252977 client.go:168] LocalClient.Create starting
	I1115 10:27:43.375355  252977 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem
	I1115 10:27:43.375405  252977 main.go:141] libmachine: Decoding PEM data...
	I1115 10:27:43.375431  252977 main.go:141] libmachine: Parsing certificate...
	I1115 10:27:43.375513  252977 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem
	I1115 10:27:43.375549  252977 main.go:141] libmachine: Decoding PEM data...
	I1115 10:27:43.375561  252977 main.go:141] libmachine: Parsing certificate...
	I1115 10:27:43.376084  252977 cli_runner.go:164] Run: docker network inspect missing-upgrade-229925 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:27:43.395535  252977 cli_runner.go:211] docker network inspect missing-upgrade-229925 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:27:43.395645  252977 network_create.go:281] running [docker network inspect missing-upgrade-229925] to gather additional debugging logs...
	I1115 10:27:43.395664  252977 cli_runner.go:164] Run: docker network inspect missing-upgrade-229925
	W1115 10:27:43.412435  252977 cli_runner.go:211] docker network inspect missing-upgrade-229925 returned with exit code 1
	I1115 10:27:43.412460  252977 network_create.go:284] error running [docker network inspect missing-upgrade-229925]: docker network inspect missing-upgrade-229925: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-229925 not found
	I1115 10:27:43.412475  252977 network_create.go:286] output of [docker network inspect missing-upgrade-229925]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-229925 not found
	
	** /stderr **
	I1115 10:27:43.412574  252977 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:27:43.431071  252977 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-77644897380e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:f9:e2:83:c3:91} reservation:<nil>}
	I1115 10:27:43.431752  252977 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e808d03b2052 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:03:24:ef:92:a1} reservation:<nil>}
	I1115 10:27:43.432405  252977 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3fb45eeaa66e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1a:b3:b7:0f:2e:99} reservation:<nil>}
	I1115 10:27:43.432980  252977 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-45fa74c79adc IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2e:55:cb:22:c6:84} reservation:<nil>}
	I1115 10:27:43.433474  252977 network.go:214] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-440e841b6fd0 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:b2:17:71:11:a4:04} reservation:<nil>}
	I1115 10:27:43.434103  252977 network.go:209] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022a07f0}
	I1115 10:27:43.434131  252977 network_create.go:124] attempt to create docker network missing-upgrade-229925 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1115 10:27:43.434177  252977 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-229925 missing-upgrade-229925
	I1115 10:27:43.697934  252977 network_create.go:108] docker network missing-upgrade-229925 192.168.94.0/24 created
	I1115 10:27:43.697987  252977 kic.go:121] calculated static IP "192.168.94.2" for the "missing-upgrade-229925" container
	I1115 10:27:43.698067  252977 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:27:43.717462  252977 cli_runner.go:164] Run: docker volume create missing-upgrade-229925 --label name.minikube.sigs.k8s.io=missing-upgrade-229925 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:27:39.999193  253283 addons.go:515] duration metric: took 5.782194ms for enable addons: enabled=[]
	I1115 10:27:39.999238  253283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:27:40.391636  253283 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:27:40.469203  253283 node_ready.go:35] waiting up to 6m0s for node "pause-642487" to be "Ready" ...
	I1115 10:27:42.371811  253283 node_ready.go:49] node "pause-642487" is "Ready"
	I1115 10:27:42.371845  253283 node_ready.go:38] duration metric: took 1.902605378s for node "pause-642487" to be "Ready" ...
	I1115 10:27:42.371859  253283 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:27:42.371913  253283 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:27:42.457526  253283 api_server.go:72] duration metric: took 2.464329595s to wait for apiserver process to appear ...
	I1115 10:27:42.457621  253283 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:27:42.457657  253283 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:27:42.468975  253283 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1115 10:27:42.469020  253283 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1115 10:27:42.958724  253283 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:27:42.963165  253283 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:27:42.963196  253283 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:27:43.457831  253283 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:27:43.463627  253283 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:27:43.463658  253283 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:27:43.957865  253283 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:27:43.961677  253283 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:27:43.961698  253283 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:27:44.458384  253283 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:27:44.462587  253283 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:27:44.462615  253283 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:27:43.374851  250315 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:27:43.375263  250315 start.go:159] libmachine.API.Create for "stopped-upgrade-567029" (driver="docker")
	I1115 10:27:43.375310  250315 client.go:168] LocalClient.Create starting
	I1115 10:27:43.375674  250315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem
	I1115 10:27:43.375738  250315 main.go:141] libmachine: Decoding PEM data...
	I1115 10:27:43.375758  250315 main.go:141] libmachine: Parsing certificate...
	I1115 10:27:43.375873  250315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem
	I1115 10:27:43.375914  250315 main.go:141] libmachine: Decoding PEM data...
	I1115 10:27:43.375928  250315 main.go:141] libmachine: Parsing certificate...
	I1115 10:27:43.376470  250315 cli_runner.go:164] Run: docker network inspect stopped-upgrade-567029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:27:43.395172  250315 cli_runner.go:211] docker network inspect stopped-upgrade-567029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:27:43.395246  250315 network_create.go:281] running [docker network inspect stopped-upgrade-567029] to gather additional debugging logs...
	I1115 10:27:43.395260  250315 cli_runner.go:164] Run: docker network inspect stopped-upgrade-567029
	W1115 10:27:43.412700  250315 cli_runner.go:211] docker network inspect stopped-upgrade-567029 returned with exit code 1
	I1115 10:27:43.412721  250315 network_create.go:284] error running [docker network inspect stopped-upgrade-567029]: docker network inspect stopped-upgrade-567029: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network stopped-upgrade-567029 not found
	I1115 10:27:43.412736  250315 network_create.go:286] output of [docker network inspect stopped-upgrade-567029]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network stopped-upgrade-567029 not found
	
	** /stderr **
	I1115 10:27:43.412843  250315 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:27:43.430507  250315 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-77644897380e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:f9:e2:83:c3:91} reservation:<nil>}
	I1115 10:27:43.431143  250315 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e808d03b2052 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:03:24:ef:92:a1} reservation:<nil>}
	I1115 10:27:43.431842  250315 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3fb45eeaa66e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1a:b3:b7:0f:2e:99} reservation:<nil>}
	I1115 10:27:43.432536  250315 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-45fa74c79adc IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2e:55:cb:22:c6:84} reservation:<nil>}
	I1115 10:27:43.433289  250315 network.go:214] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-440e841b6fd0 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:b2:17:71:11:a4:04} reservation:<nil>}
	I1115 10:27:43.435173  250315 network.go:212] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1115 10:27:43.435775  250315 network.go:209] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002e24a30}
	I1115 10:27:43.435792  250315 network_create.go:124] attempt to create docker network stopped-upgrade-567029 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1115 10:27:43.435834  250315 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=stopped-upgrade-567029 stopped-upgrade-567029
	I1115 10:27:43.708414  250315 network_create.go:108] docker network stopped-upgrade-567029 192.168.103.0/24 created
	I1115 10:27:43.708469  250315 kic.go:121] calculated static IP "192.168.103.2" for the "stopped-upgrade-567029" container
	I1115 10:27:43.708547  250315 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:27:43.727521  250315 cli_runner.go:164] Run: docker volume create stopped-upgrade-567029 --label name.minikube.sigs.k8s.io=stopped-upgrade-567029 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:27:43.752646  250315 oci.go:103] Successfully created a docker volume stopped-upgrade-567029
	I1115 10:27:43.752732  250315 cli_runner.go:164] Run: docker run --rm --name stopped-upgrade-567029-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-567029 --entrypoint /usr/bin/test -v stopped-upgrade-567029:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
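
The subnet scan in the 250315 lines above walks candidate 192.168.x.0/24 ranges, skipping those already assigned to docker bridges (or reserved), until it finds 192.168.103.0/24 free and creates the network. The Go program below is a hypothetical, simplified sketch of that selection, not minikube's actual network.go: it only consults docker's IPAM data and reuses the third-octet step (49, 58, 67, ...) visible in the log.

// free_subnet.go - hypothetical sketch of the free-subnet scan logged above.
// Lists docker networks, records their IPAM subnets, then walks 192.168.x.0/24
// candidates until one is not taken. Requires the docker CLI on PATH.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func takenSubnets() (map[string]bool, error) {
	ids, err := exec.Command("docker", "network", "ls", "-q").Output()
	if err != nil {
		return nil, err
	}
	taken := map[string]bool{}
	for _, id := range strings.Fields(string(ids)) {
		out, err := exec.Command("docker", "network", "inspect", id,
			"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
		if err != nil {
			continue // network may have been removed meanwhile; skip it
		}
		for _, s := range strings.Fields(string(out)) {
			taken[s] = true
		}
	}
	return taken, nil
}

func main() {
	taken, err := takenSubnets()
	if err != nil {
		panic(err)
	}
	// Step of 9 in the third octet mirrors the 49 -> 58 -> 67 -> ... progression above.
	for third := 49; third <= 247; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[cidr] {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", cidr)
		return
	}
	fmt.Println("no free 192.168.x.0/24 subnet found")
}

Run on the same host, this would print roughly the same skip/use sequence, minus the "reserved" subnets that minikube also tracks internally.
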
	I1115 10:27:45.786630  254667 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-914881:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (6.847280473s)
	I1115 10:27:45.786668  254667 kic.go:203] duration metric: took 6.84748174s to extract preloaded images to volume ...
	W1115 10:27:45.786841  254667 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 10:27:45.787011  254667 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 10:27:45.855097  254667 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-914881 --name kubernetes-upgrade-914881 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-914881 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-914881 --network kubernetes-upgrade-914881 --ip 192.168.85.2 --volume kubernetes-upgrade-914881:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 10:27:46.160541  254667 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-914881 --format={{.State.Running}}
	I1115 10:27:46.178349  254667 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-914881 --format={{.State.Status}}
	I1115 10:27:46.196723  254667 cli_runner.go:164] Run: docker exec kubernetes-upgrade-914881 stat /var/lib/dpkg/alternatives/iptables
	I1115 10:27:46.243070  254667 oci.go:144] the created container "kubernetes-upgrade-914881" has a running status.
	I1115 10:27:46.243100  254667 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/kubernetes-upgrade-914881/id_rsa...
	I1115 10:27:47.102486  254667 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21894-55448/.minikube/machines/kubernetes-upgrade-914881/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 10:27:47.125603  254667 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-914881 --format={{.State.Status}}
	I1115 10:27:47.142848  254667 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 10:27:47.142868  254667 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-914881 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 10:27:47.190891  254667 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-914881 --format={{.State.Status}}
	I1115 10:27:47.208000  254667 machine.go:94] provisionDockerMachine start ...
	I1115 10:27:47.208103  254667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-914881
	I1115 10:27:47.224325  254667 main.go:143] libmachine: Using SSH client type: native
	I1115 10:27:47.224573  254667 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33004 <nil> <nil>}
	I1115 10:27:47.224587  254667 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:27:47.351864  254667 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-914881
	
	I1115 10:27:47.351893  254667 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-914881"
	I1115 10:27:47.351996  254667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-914881
	I1115 10:27:47.371600  254667 main.go:143] libmachine: Using SSH client type: native
	I1115 10:27:47.371893  254667 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33004 <nil> <nil>}
	I1115 10:27:47.371915  254667 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-914881 && echo "kubernetes-upgrade-914881" | sudo tee /etc/hostname
	I1115 10:27:47.518404  254667 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-914881
	
	I1115 10:27:47.518474  254667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-914881
	I1115 10:27:47.535215  254667 main.go:143] libmachine: Using SSH client type: native
	I1115 10:27:47.535461  254667 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33004 <nil> <nil>}
	I1115 10:27:47.535490  254667 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-914881' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-914881/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-914881' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:27:47.662480  254667 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:27:47.662516  254667 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-55448/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-55448/.minikube}
	I1115 10:27:47.662558  254667 ubuntu.go:190] setting up certificates
	I1115 10:27:47.662574  254667 provision.go:84] configureAuth start
	I1115 10:27:47.662636  254667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-914881
	I1115 10:27:47.679273  254667 provision.go:143] copyHostCerts
	I1115 10:27:47.679336  254667 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem, removing ...
	I1115 10:27:47.679348  254667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem
	I1115 10:27:47.679412  254667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem (1082 bytes)
	I1115 10:27:47.679496  254667 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem, removing ...
	I1115 10:27:47.679507  254667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem
	I1115 10:27:47.679533  254667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem (1123 bytes)
	I1115 10:27:47.679588  254667 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem, removing ...
	I1115 10:27:47.679594  254667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem
	I1115 10:27:47.679615  254667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem (1679 bytes)
	I1115 10:27:47.679671  254667 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-914881 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-914881 localhost minikube]
	I1115 10:27:47.764722  254667 provision.go:177] copyRemoteCerts
	I1115 10:27:47.764794  254667 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:27:47.764830  254667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-914881
	I1115 10:27:47.781830  254667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/kubernetes-upgrade-914881/id_rsa Username:docker}
	I1115 10:27:47.877066  254667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1115 10:27:47.900835  254667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 10:27:47.919996  254667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:27:47.938762  254667 provision.go:87] duration metric: took 276.168397ms to configureAuth
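
configureAuth above generates a server certificate signed by the minikube CA with the SANs listed in the provision.go:117 line (127.0.0.1, 192.168.85.2, kubernetes-upgrade-914881, localhost, minikube) and then scp's it to /etc/docker on the node. Below is a minimal sketch of that signing step only, assuming a PKCS#1 RSA CA key and hypothetical local file names (ca.pem, ca-key.pem, server.pem, server-key.pem); it is not minikube's provision code.

// server_cert.go - hypothetical sketch of the "generating server cert" step above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// mustDecode reads a PEM file and returns its first block's DER bytes.
func mustDecode(path string) []byte {
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in " + path)
	}
	return block.Bytes
}

func main() {
	caCert, err := x509.ParseCertificate(mustDecode("ca.pem"))
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustDecode("ca-key.pem")) // assumes an RSA CA key
	if err != nil {
		panic(err)
	}
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-914881"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the log line above.
		DNSNames:    []string{"kubernetes-upgrade-914881", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
	os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600)
}
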
	I1115 10:27:47.938793  254667 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:27:47.939024  254667 config.go:182] Loaded profile config "kubernetes-upgrade-914881": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 10:27:47.939154  254667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-914881
	I1115 10:27:47.958377  254667 main.go:143] libmachine: Using SSH client type: native
	I1115 10:27:47.958722  254667 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33004 <nil> <nil>}
	I1115 10:27:47.958750  254667 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:27:43.751740  252977 oci.go:103] Successfully created a docker volume missing-upgrade-229925
	I1115 10:27:43.754521  252977 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-229925-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-229925 --entrypoint /usr/bin/test -v missing-upgrade-229925:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1115 10:27:44.958434  253283 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:27:44.962565  253283 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:27:44.962596  253283 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:27:45.458211  253283 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:27:45.462489  253283 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:27:45.462516  253283 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:27:45.958128  253283 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:27:45.962051  253283 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1115 10:27:45.962944  253283 api_server.go:141] control plane version: v1.34.1
	I1115 10:27:45.962981  253283 api_server.go:131] duration metric: took 3.505340863s to wait for apiserver health ...
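
The healthz wait above polls https://192.168.76.2:8443/healthz roughly every 500ms, tolerating 500 responses while the rbac/bootstrap-roles post-start hook finishes, and stops once a 200 arrives about 3.5s in. A minimal, self-contained sketch of that loop follows; the real check trusts the cluster CA rather than skipping TLS verification.

// healthz_wait.go - hypothetical sketch of the apiserver /healthz polling logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Sketch only: minikube's checker uses the cluster CA instead of skipping verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	const url = "https://192.168.76.2:8443/healthz" // endpoint from the log
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Println("healthz returned", resp.StatusCode)
			if resp.StatusCode == http.StatusOK {
				fmt.Println(string(body)) // "ok"
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
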
	I1115 10:27:45.962991  253283 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:27:45.965940  253283 system_pods.go:59] 7 kube-system pods found
	I1115 10:27:45.965995  253283 system_pods.go:61] "coredns-66bc5c9577-8nbgb" [9f4b3526-3889-4bc5-81e0-cbab60c70c2d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:27:45.966007  253283 system_pods.go:61] "etcd-pause-642487" [8d12057d-2b14-41f1-b978-6ae6055dd411] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:27:45.966013  253283 system_pods.go:61] "kindnet-jh5hv" [17e3aa21-2ac9-4ce2-9a63-54e13281bde5] Running
	I1115 10:27:45.966023  253283 system_pods.go:61] "kube-apiserver-pause-642487" [d989168d-4a2d-4db8-a4ac-068d4907344c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:27:45.966031  253283 system_pods.go:61] "kube-controller-manager-pause-642487" [f8184dcf-1283-40bb-bf91-612578293a38] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:27:45.966038  253283 system_pods.go:61] "kube-proxy-jhknt" [66848a4f-7a86-4b64-adb1-2ebb61ff9ddc] Running
	I1115 10:27:45.966043  253283 system_pods.go:61] "kube-scheduler-pause-642487" [b34e6f2e-392d-4b42-be08-6b7bf986de7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:27:45.966052  253283 system_pods.go:74] duration metric: took 3.052477ms to wait for pod list to return data ...
	I1115 10:27:45.966063  253283 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:27:45.968084  253283 default_sa.go:45] found service account: "default"
	I1115 10:27:45.968103  253283 default_sa.go:55] duration metric: took 2.031853ms for default service account to be created ...
	I1115 10:27:45.968111  253283 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:27:45.970452  253283 system_pods.go:86] 7 kube-system pods found
	I1115 10:27:45.970484  253283 system_pods.go:89] "coredns-66bc5c9577-8nbgb" [9f4b3526-3889-4bc5-81e0-cbab60c70c2d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:27:45.970492  253283 system_pods.go:89] "etcd-pause-642487" [8d12057d-2b14-41f1-b978-6ae6055dd411] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:27:45.970497  253283 system_pods.go:89] "kindnet-jh5hv" [17e3aa21-2ac9-4ce2-9a63-54e13281bde5] Running
	I1115 10:27:45.970502  253283 system_pods.go:89] "kube-apiserver-pause-642487" [d989168d-4a2d-4db8-a4ac-068d4907344c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:27:45.970508  253283 system_pods.go:89] "kube-controller-manager-pause-642487" [f8184dcf-1283-40bb-bf91-612578293a38] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:27:45.970514  253283 system_pods.go:89] "kube-proxy-jhknt" [66848a4f-7a86-4b64-adb1-2ebb61ff9ddc] Running
	I1115 10:27:45.970519  253283 system_pods.go:89] "kube-scheduler-pause-642487" [b34e6f2e-392d-4b42-be08-6b7bf986de7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:27:45.970532  253283 system_pods.go:126] duration metric: took 2.414128ms to wait for k8s-apps to be running ...
	I1115 10:27:45.970545  253283 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:27:45.970592  253283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:27:45.984334  253283 system_svc.go:56] duration metric: took 13.778291ms WaitForService to wait for kubelet
	I1115 10:27:45.984365  253283 kubeadm.go:587] duration metric: took 5.991197356s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:27:45.984387  253283 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:27:45.987043  253283 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:27:45.987072  253283 node_conditions.go:123] node cpu capacity is 8
	I1115 10:27:45.987086  253283 node_conditions.go:105] duration metric: took 2.692928ms to run NodePressure ...
	I1115 10:27:45.987102  253283 start.go:242] waiting for startup goroutines ...
	I1115 10:27:45.987118  253283 start.go:247] waiting for cluster config update ...
	I1115 10:27:45.987133  253283 start.go:256] writing updated cluster config ...
	I1115 10:27:45.987422  253283 ssh_runner.go:195] Run: rm -f paused
	I1115 10:27:45.990985  253283 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:27:45.991414  253283 kapi.go:59] client config for pause-642487: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487/client.key", CAFile:"/home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 10:27:45.993575  253283 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8nbgb" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 10:27:47.999106  253283 pod_ready.go:104] pod "coredns-66bc5c9577-8nbgb" is not "Ready", error: <nil>
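
From here the 253283 run waits (up to the 4m0s noted above) for each listed kube-system pod to leave ContainersNotReady. The same condition can be checked by hand with kubectl; the sketch below shells out to kubectl wait, using the coredns pod name from the log and a hypothetical kubeconfig path.

// pod_ready_wait.go - hypothetical sketch of the "extra waiting ... to be Ready" step above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl",
		"--kubeconfig", os.ExpandEnv("$HOME/.kube/config"), // hypothetical path
		"-n", "kube-system",
		"wait", "--for=condition=Ready",
		"pod/coredns-66bc5c9577-8nbgb", // pod name from the log above
		"--timeout=240s")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("pod did not become Ready in time:", err)
		os.Exit(1)
	}
}
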
	I1115 10:27:48.205288  254667 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:27:48.205310  254667 machine.go:97] duration metric: took 997.280736ms to provisionDockerMachine
	I1115 10:27:48.205320  254667 client.go:176] duration metric: took 9.852898594s to LocalClient.Create
	I1115 10:27:48.205341  254667 start.go:167] duration metric: took 9.852953632s to libmachine.API.Create "kubernetes-upgrade-914881"
	I1115 10:27:48.205348  254667 start.go:293] postStartSetup for "kubernetes-upgrade-914881" (driver="docker")
	I1115 10:27:48.205361  254667 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:27:48.205422  254667 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:27:48.205463  254667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-914881
	I1115 10:27:48.222450  254667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/kubernetes-upgrade-914881/id_rsa Username:docker}
	I1115 10:27:48.318261  254667 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:27:48.322250  254667 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:27:48.322279  254667 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:27:48.322292  254667 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/addons for local assets ...
	I1115 10:27:48.322352  254667 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/files for local assets ...
	I1115 10:27:48.322464  254667 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem -> 589622.pem in /etc/ssl/certs
	I1115 10:27:48.322600  254667 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:27:48.330387  254667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:27:48.353166  254667 start.go:296] duration metric: took 147.798785ms for postStartSetup
	I1115 10:27:48.353561  254667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-914881
	I1115 10:27:48.372622  254667 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/config.json ...
	I1115 10:27:48.372880  254667 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:27:48.372932  254667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-914881
	I1115 10:27:48.391647  254667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/kubernetes-upgrade-914881/id_rsa Username:docker}
	I1115 10:27:48.487219  254667 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:27:48.492802  254667 start.go:128] duration metric: took 10.142230633s to createHost
	I1115 10:27:48.492830  254667 start.go:83] releasing machines lock for "kubernetes-upgrade-914881", held for 10.142367049s
	I1115 10:27:48.492902  254667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-914881
	I1115 10:27:48.511141  254667 ssh_runner.go:195] Run: cat /version.json
	I1115 10:27:48.511202  254667 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:27:48.511258  254667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-914881
	I1115 10:27:48.511203  254667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-914881
	I1115 10:27:48.528738  254667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/kubernetes-upgrade-914881/id_rsa Username:docker}
	I1115 10:27:48.528985  254667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/kubernetes-upgrade-914881/id_rsa Username:docker}
	I1115 10:27:48.680757  254667 ssh_runner.go:195] Run: systemctl --version
	I1115 10:27:48.687292  254667 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:27:48.721053  254667 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:27:48.725744  254667 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:27:48.725800  254667 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:27:48.750323  254667 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1115 10:27:48.750354  254667 start.go:496] detecting cgroup driver to use...
	I1115 10:27:48.750391  254667 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:27:48.750439  254667 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:27:48.765399  254667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:27:48.777585  254667 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:27:48.777648  254667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:27:48.793032  254667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:27:48.809028  254667 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:27:48.906462  254667 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:27:49.001586  254667 docker.go:234] disabling docker service ...
	I1115 10:27:49.001647  254667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:27:49.020352  254667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:27:49.032487  254667 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:27:49.134766  254667 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:27:49.219392  254667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:27:49.232446  254667 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:27:49.246035  254667 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1115 10:27:49.246100  254667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:49.256066  254667 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:27:49.256121  254667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:49.264478  254667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:49.272817  254667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:49.281545  254667 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:27:49.289205  254667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:49.297519  254667 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:49.311352  254667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:49.320035  254667 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:27:49.327113  254667 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:27:49.334281  254667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:27:49.424343  254667 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:27:49.759791  254667 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:27:49.759897  254667 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:27:49.763971  254667 start.go:564] Will wait 60s for crictl version
	I1115 10:27:49.764033  254667 ssh_runner.go:195] Run: which crictl
	I1115 10:27:49.767517  254667 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:27:49.790125  254667 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:27:49.790193  254667 ssh_runner.go:195] Run: crio --version
	I1115 10:27:49.816493  254667 ssh_runner.go:195] Run: crio --version
	I1115 10:27:49.844287  254667 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1115 10:27:49.845502  254667 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-914881 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:27:49.861680  254667 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1115 10:27:49.865661  254667 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:27:49.877199  254667 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-914881 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-914881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:27:49.877374  254667 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 10:27:49.877430  254667 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:27:49.916370  254667 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:27:49.916399  254667 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:27:49.916455  254667 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:27:49.943783  254667 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:27:49.943809  254667 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:27:49.943820  254667 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1115 10:27:49.943931  254667 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-914881 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-914881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:27:49.944034  254667 ssh_runner.go:195] Run: crio config
	I1115 10:27:49.994837  254667 cni.go:84] Creating CNI manager for ""
	I1115 10:27:49.994863  254667 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:27:49.994887  254667 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:27:49.994913  254667 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-914881 NodeName:kubernetes-upgrade-914881 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:27:49.995092  254667 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-914881"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:27:49.995177  254667 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1115 10:27:50.004130  254667 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:27:50.004198  254667 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:27:50.011936  254667 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1115 10:27:50.024324  254667 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:27:50.038924  254667 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I1115 10:27:50.051575  254667 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:27:50.055042  254667 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:27:50.064612  254667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:27:50.153630  254667 ssh_runner.go:195] Run: sudo systemctl start kubelet
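
The three "scp memory" lines above write the kubelet drop-in (10-kubeadm.conf), the kubelet unit, and kubeadm.yaml.new to the node; systemd is then reloaded and kubelet started. A hypothetical sketch of just the drop-in step, with the ExecStart line copied from the kubeadm.go:947 output above (must run as root on the node):

// kubelet_dropin.go - hypothetical sketch of installing the kubelet drop-in shown above.
package main

import (
	"os"
	"os/exec"
)

const dropin = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-914881 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2

[Install]
`

func main() {
	dir := "/etc/systemd/system/kubelet.service.d"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile(dir+"/10-kubeadm.conf", []byte(dropin), 0o644); err != nil {
		panic(err)
	}
	// Reload unit files and start kubelet, mirroring the two ssh_runner calls above.
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "start", "kubelet"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			panic(string(out))
		}
	}
}
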
	I1115 10:27:50.182999  254667 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881 for IP: 192.168.85.2
	I1115 10:27:50.183028  254667 certs.go:195] generating shared ca certs ...
	I1115 10:27:50.183050  254667 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:27:50.183237  254667 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 10:27:50.183306  254667 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 10:27:50.183322  254667 certs.go:257] generating profile certs ...
	I1115 10:27:50.183395  254667 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/client.key
	I1115 10:27:50.183423  254667 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/client.crt with IP's: []
	I1115 10:27:50.395529  254667 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/client.crt ...
	I1115 10:27:50.395561  254667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/client.crt: {Name:mk573464f21868e08a58dc2e57c10697a3e4721a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:27:50.395733  254667 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/client.key ...
	I1115 10:27:50.395746  254667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/client.key: {Name:mk2905b5bd443614626e6860396d60c0ab5adc65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:27:50.395823  254667 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/apiserver.key.440325a9
	I1115 10:27:50.395839  254667 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/apiserver.crt.440325a9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1115 10:27:50.955042  254667 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/apiserver.crt.440325a9 ...
	I1115 10:27:50.955072  254667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/apiserver.crt.440325a9: {Name:mk8a5f94563e3ebbc844e953848ac9e54091501b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:27:50.955275  254667 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/apiserver.key.440325a9 ...
	I1115 10:27:50.955297  254667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/apiserver.key.440325a9: {Name:mk8e02912c9e23cba80d97b436b974f14f651eab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:27:50.955418  254667 certs.go:382] copying /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/apiserver.crt.440325a9 -> /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/apiserver.crt
	I1115 10:27:50.955526  254667 certs.go:386] copying /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/apiserver.key.440325a9 -> /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/apiserver.key
	I1115 10:27:50.955616  254667 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/proxy-client.key
	I1115 10:27:50.955638  254667 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/proxy-client.crt with IP's: []
	I1115 10:27:51.092835  254667 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/proxy-client.crt ...
	I1115 10:27:51.092863  254667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/proxy-client.crt: {Name:mk2b5bfc15d7bfb5cfc4a1da4ee531587d453a54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:27:51.093072  254667 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/proxy-client.key ...
	I1115 10:27:51.093092  254667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/proxy-client.key: {Name:mk1f67f5d6e5f744fe14c4d47c5b32ed3babc210 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:27:51.093326  254667 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:27:51.093371  254667 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:27:51.093389  254667 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:27:51.093429  254667 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:27:51.093462  254667 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:27:51.093497  254667 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:27:51.093551  254667 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:27:51.094237  254667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:27:51.113018  254667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:27:51.129793  254667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:27:51.146567  254667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:27:51.163226  254667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1115 10:27:51.180263  254667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:27:51.197808  254667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:27:51.220575  254667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 10:27:51.240269  254667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:27:51.263761  254667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:27:51.281699  254667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:27:51.302484  254667 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:27:51.315629  254667 ssh_runner.go:195] Run: openssl version
	I1115 10:27:51.322346  254667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:27:51.331251  254667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:27:51.335462  254667 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:27:51.335514  254667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:27:51.370131  254667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:27:51.378463  254667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:27:51.387350  254667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:27:51.391076  254667 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:27:51.391132  254667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:27:51.425820  254667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:27:51.434246  254667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:27:51.442466  254667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:27:51.446401  254667 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:27:51.446462  254667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:27:51.480505  254667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:27:51.488946  254667 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:27:51.492791  254667 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
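The block above copies the cluster and profile certificates onto the node and then exposes the CA bundles to the system trust store: each PEM under /usr/share/ca-certificates is hashed with openssl and symlinked into /etc/ssl/certs under its subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL-based clients on the node look up trusted CAs. The failed stat that follows is expected on a first start, since kubeadm has not yet generated the apiserver-kubelet-client certificate. A minimal sketch of the hash-and-link step, using the same commands the log runs over ssh:

    # compute the OpenSSL subject hash for the CA (the lookup name libssl uses)
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # publish the CA under its hashed name so TLS clients on the node can find it
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"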
	I1115 10:27:51.492853  254667 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-914881 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-914881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:27:51.492982  254667 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:27:51.493035  254667 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:27:51.520878  254667 cri.go:89] found id: ""
	I1115 10:27:51.520948  254667 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:27:51.528888  254667 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 10:27:51.536525  254667 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 10:27:51.536575  254667 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 10:27:51.544138  254667 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 10:27:51.544156  254667 kubeadm.go:158] found existing configuration files:
	
	I1115 10:27:51.544219  254667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 10:27:51.552035  254667 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 10:27:51.552101  254667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 10:27:51.559261  254667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 10:27:51.566725  254667 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 10:27:51.566783  254667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 10:27:51.573569  254667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 10:27:51.581012  254667 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 10:27:51.581065  254667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 10:27:51.587845  254667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 10:27:51.594922  254667 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 10:27:51.595000  254667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
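Before invoking kubeadm, minikube checks for stale kubeconfigs from a previous cluster: each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the check fails. Here all four files are simply absent, so the removals are no-ops. The cleanup is roughly equivalent to this sketch (endpoint taken from the log above):

    # drop any kubeconfig that does not point at the expected control-plane endpoint
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done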
	I1115 10:27:51.601799  254667 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 10:27:51.654293  254667 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1115 10:27:51.654375  254667 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 10:27:51.692047  254667 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 10:27:51.692138  254667 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1115 10:27:51.692200  254667 kubeadm.go:319] OS: Linux
	I1115 10:27:51.692266  254667 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 10:27:51.692326  254667 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 10:27:51.692393  254667 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 10:27:51.692458  254667 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 10:27:51.692528  254667 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 10:27:51.692594  254667 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 10:27:51.692673  254667 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 10:27:51.692744  254667 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 10:27:51.692840  254667 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 10:27:51.761320  254667 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 10:27:51.761456  254667 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 10:27:51.761600  254667 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1115 10:27:51.927754  254667 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 10:27:51.930525  254667 out.go:252]   - Generating certificates and keys ...
	I1115 10:27:51.930661  254667 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 10:27:51.930777  254667 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 10:27:52.140117  254667 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 10:27:52.256187  254667 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 10:27:52.386520  254667 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 10:27:52.559761  254667 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 10:27:52.736556  254667 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 10:27:52.736726  254667 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-914881 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1115 10:27:52.827015  254667 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 10:27:52.827211  254667 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-914881 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1115 10:27:53.019232  254667 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 10:27:53.253702  254667 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 10:27:53.336002  254667 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 10:27:53.336085  254667 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 10:27:53.421057  254667 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 10:27:53.591346  254667 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 10:27:53.685695  254667 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 10:27:53.786443  254667 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 10:27:53.787201  254667 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 10:27:53.791788  254667 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1115 10:27:49.999544  253283 pod_ready.go:104] pod "coredns-66bc5c9577-8nbgb" is not "Ready", error: <nil>
	I1115 10:27:50.499397  253283 pod_ready.go:94] pod "coredns-66bc5c9577-8nbgb" is "Ready"
	I1115 10:27:50.499425  253283 pod_ready.go:86] duration metric: took 4.505831755s for pod "coredns-66bc5c9577-8nbgb" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:27:50.501755  253283 pod_ready.go:83] waiting for pod "etcd-pause-642487" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:27:50.505213  253283 pod_ready.go:94] pod "etcd-pause-642487" is "Ready"
	I1115 10:27:50.505232  253283 pod_ready.go:86] duration metric: took 3.458764ms for pod "etcd-pause-642487" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:27:50.507060  253283 pod_ready.go:83] waiting for pod "kube-apiserver-pause-642487" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 10:27:52.512599  253283 pod_ready.go:104] pod "kube-apiserver-pause-642487" is not "Ready", error: <nil>
	W1115 10:27:55.012760  253283 pod_ready.go:104] pod "kube-apiserver-pause-642487" is not "Ready", error: <nil>
	I1115 10:27:56.512499  253283 pod_ready.go:94] pod "kube-apiserver-pause-642487" is "Ready"
	I1115 10:27:56.512533  253283 pod_ready.go:86] duration metric: took 6.00544877s for pod "kube-apiserver-pause-642487" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:27:56.514778  253283 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-642487" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:27:56.518829  253283 pod_ready.go:94] pod "kube-controller-manager-pause-642487" is "Ready"
	I1115 10:27:56.518852  253283 pod_ready.go:86] duration metric: took 4.05134ms for pod "kube-controller-manager-pause-642487" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:27:56.520763  253283 pod_ready.go:83] waiting for pod "kube-proxy-jhknt" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:27:56.524616  253283 pod_ready.go:94] pod "kube-proxy-jhknt" is "Ready"
	I1115 10:27:56.524637  253283 pod_ready.go:86] duration metric: took 3.853942ms for pod "kube-proxy-jhknt" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:27:56.526524  253283 pod_ready.go:83] waiting for pod "kube-scheduler-pause-642487" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:27:56.711833  253283 pod_ready.go:94] pod "kube-scheduler-pause-642487" is "Ready"
	I1115 10:27:56.711863  253283 pod_ready.go:86] duration metric: took 185.314944ms for pod "kube-scheduler-pause-642487" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:27:56.711878  253283 pod_ready.go:40] duration metric: took 10.720860275s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:27:56.761368  253283 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:27:56.763472  253283 out.go:179] * Done! kubectl is now configured to use "pause-642487" cluster and "default" namespace by default
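The interleaved lines from process 253283 are the tail end of the pause-642487 start: the remaining kube-system pods become Ready and the run finishes with a client/server skew check (kubectl 1.34.2 against cluster 1.34.1, minor skew 0). The same comparison can be reproduced by hand; the context name below assumes minikube named it after the profile, as it does by default:

    # print client and server versions; a minor-version skew of at most one is supported
    kubectl version --output=yaml --context pause-642487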
	I1115 10:27:53.793470  254667 out.go:252]   - Booting up control plane ...
	I1115 10:27:53.793587  254667 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 10:27:53.793699  254667 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 10:27:53.794263  254667 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 10:27:53.809602  254667 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 10:27:53.810412  254667 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 10:27:53.810492  254667 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 10:27:53.912862  254667 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1115 10:27:59.415014  254667 kubeadm.go:319] [apiclient] All control plane components are healthy after 5.502184 seconds
	I1115 10:27:59.415188  254667 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 10:27:59.428114  254667 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 10:27:59.955729  254667 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 10:27:59.956062  254667 kubeadm.go:319] [mark-control-plane] Marking the node kubernetes-upgrade-914881 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 10:28:00.467057  254667 kubeadm.go:319] [bootstrap-token] Using token: 3r7r6m.gk79loswr1cjfhrb
	I1115 10:28:00.468280  254667 out.go:252]   - Configuring RBAC rules ...
	I1115 10:28:00.468448  254667 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 10:28:00.473111  254667 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 10:28:00.480744  254667 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 10:28:00.483733  254667 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 10:28:00.486741  254667 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 10:28:00.490472  254667 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 10:28:00.500916  254667 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 10:28:00.722161  254667 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 10:28:00.884210  254667 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 10:28:00.885627  254667 kubeadm.go:319] 
	I1115 10:28:00.885851  254667 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 10:28:00.885872  254667 kubeadm.go:319] 
	I1115 10:28:00.885994  254667 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 10:28:00.886010  254667 kubeadm.go:319] 
	I1115 10:28:00.886063  254667 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 10:28:00.886168  254667 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 10:28:00.886239  254667 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 10:28:00.886245  254667 kubeadm.go:319] 
	I1115 10:28:00.886338  254667 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 10:28:00.886359  254667 kubeadm.go:319] 
	I1115 10:28:00.886543  254667 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 10:28:00.886565  254667 kubeadm.go:319] 
	I1115 10:28:00.886633  254667 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 10:28:00.886738  254667 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 10:28:00.886829  254667 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 10:28:00.886843  254667 kubeadm.go:319] 
	I1115 10:28:00.886975  254667 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 10:28:00.887101  254667 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 10:28:00.887130  254667 kubeadm.go:319] 
	I1115 10:28:00.887254  254667 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 3r7r6m.gk79loswr1cjfhrb \
	I1115 10:28:00.887385  254667 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c958101d3a67868123c5e439a6c1ea6e30a99a6538b76cbe461e2c919cf1aec \
	I1115 10:28:00.887409  254667 kubeadm.go:319] 	--control-plane 
	I1115 10:28:00.887415  254667 kubeadm.go:319] 
	I1115 10:28:00.887518  254667 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 10:28:00.887524  254667 kubeadm.go:319] 
	I1115 10:28:00.887621  254667 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 3r7r6m.gk79loswr1cjfhrb \
	I1115 10:28:00.887748  254667 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c958101d3a67868123c5e439a6c1ea6e30a99a6538b76cbe461e2c919cf1aec 
	I1115 10:28:00.890489  254667 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1115 10:28:00.890642  254667 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
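kubeadm finishes by printing the control-plane and worker join commands, embedding the bootstrap token and CA cert hash shown above; bootstrap tokens expire after 24 hours by default, so the printed command is only valid for a short window. If a node needed to join later, a fresh command could be generated on the control plane with the same kubeadm binary the log uses, e.g.:

    # re-issue a bootstrap token and print a fresh worker join command
    sudo /var/lib/minikube/binaries/v1.28.0/kubeadm token create --print-join-command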
	I1115 10:28:00.890672  254667 cni.go:84] Creating CNI manager for ""
	I1115 10:28:00.890681  254667 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:28:00.892859  254667 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1115 10:28:00.894191  254667 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 10:28:00.901181  254667 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1115 10:28:00.901202  254667 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 10:28:00.920852  254667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 10:28:02.018144  254667 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.097235724s)
	I1115 10:28:02.018204  254667 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:28:02.018318  254667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:28:02.018318  254667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kubernetes-upgrade-914881 minikube.k8s.io/updated_at=2025_11_15T10_28_02_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510 minikube.k8s.io/name=kubernetes-upgrade-914881 minikube.k8s.io/primary=true
	I1115 10:28:02.031122  254667 ops.go:34] apiserver oom_adj: -16
	I1115 10:28:02.135280  254667 kubeadm.go:1114] duration metric: took 117.027873ms to wait for elevateKubeSystemPrivileges
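With the CNI manifest applied, minikube verifies the apiserver's oom_adj, labels the node, and creates a minikube-rbac clusterrolebinding granting cluster-admin to the kube-system:default service account (the "elevateKubeSystemPrivileges" step timed above). The binding can be inspected with the same kubectl binary and kubeconfig the log uses, e.g.:

    # confirm the binding created by elevateKubeSystemPrivileges
    sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get clusterrolebinding minikube-rbac -o yaml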
	I1115 10:28:02.146689  254667 kubeadm.go:403] duration metric: took 10.653833498s to StartCluster
	I1115 10:28:02.146732  254667 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:28:02.146813  254667 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:28:02.148209  254667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:28:02.148471  254667 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:28:02.148495  254667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 10:28:02.148658  254667 config.go:182] Loaded profile config "kubernetes-upgrade-914881": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 10:28:02.148612  254667 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:28:02.148729  254667 addons.go:70] Setting storage-provisioner=true in profile "kubernetes-upgrade-914881"
	I1115 10:28:02.148748  254667 addons.go:239] Setting addon storage-provisioner=true in "kubernetes-upgrade-914881"
	I1115 10:28:02.148775  254667 addons.go:70] Setting default-storageclass=true in profile "kubernetes-upgrade-914881"
	I1115 10:28:02.148851  254667 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-914881"
	I1115 10:28:02.148789  254667 host.go:66] Checking if "kubernetes-upgrade-914881" exists ...
	I1115 10:28:02.149511  254667 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-914881 --format={{.State.Status}}
	I1115 10:28:02.149901  254667 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-914881 --format={{.State.Status}}
	I1115 10:28:02.150813  254667 out.go:179] * Verifying Kubernetes components...
	I1115 10:28:02.151920  254667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:28:02.175349  254667 kapi.go:59] client config for kubernetes-upgrade-914881: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kubernetes-upgrade-914881/client.key", CAFile:"/home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 10:28:02.175975  254667 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> CRI-O <==
	Nov 15 10:27:38 pause-642487 crio[2298]: time="2025-11-15T10:27:38.157218844Z" level=info msg="Starting container: 0ef58fd5a90ee7a6981b544c76026e5556c7883732036cbb3edd93733ccf4a6c" id=3fc6f235-afd4-4baa-a689-7f90162f5495 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:27:38 pause-642487 crio[2298]: time="2025-11-15T10:27:38.157552138Z" level=info msg="Starting container: ad476564fba4d88d6b14f082586a864bcf39dadd4f5371f877dda5662a573cf6" id=a5a9aeb1-bee4-4bc8-ad6a-cc82ad540b1a name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:27:38 pause-642487 crio[2298]: time="2025-11-15T10:27:38.158032545Z" level=info msg="Started container" PID=2521 containerID=602dacd6f75382f88ebd4b70e2712c63d504c2104f09e69e7ed06953540d3bd6 description=kube-system/kindnet-jh5hv/kindnet-cni id=f799f1f8-2434-4890-badf-b336c316cb3e name=/runtime.v1.RuntimeService/StartContainer sandboxID=f3c1c95bc887b75d87a6cfd0ba3641aee12535d352b8f7ec34a007a01a0af04a
	Nov 15 10:27:38 pause-642487 crio[2298]: time="2025-11-15T10:27:38.158183959Z" level=info msg="Started container" PID=2501 containerID=fb8520106a10f5510cb789cf55fa41e2dcef15b4e49b1e8e695b835e5d20c27f description=kube-system/kube-controller-manager-pause-642487/kube-controller-manager id=1e6fd0e0-6786-4792-956a-9982698de34e name=/runtime.v1.RuntimeService/StartContainer sandboxID=344368da8364c593f8715163dc74a517d9801d1d0d212cf235d57311667695c2
	Nov 15 10:27:38 pause-642487 crio[2298]: time="2025-11-15T10:27:38.1601462Z" level=info msg="Created container 3d25753d2c98879b7a05bcb03f07c48e41dbace9c81399d0d21f0952ab2ca92b: kube-system/kube-proxy-jhknt/kube-proxy" id=4ed7f28c-4bf8-4541-b3be-ed25846ea6a6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:27:38 pause-642487 crio[2298]: time="2025-11-15T10:27:38.165264584Z" level=info msg="Started container" PID=2512 containerID=0ef58fd5a90ee7a6981b544c76026e5556c7883732036cbb3edd93733ccf4a6c description=kube-system/kube-apiserver-pause-642487/kube-apiserver id=3fc6f235-afd4-4baa-a689-7f90162f5495 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2bd1e8d174df222901341522574aacefce72ba747d59f30578fe6e69a06f3a21
	Nov 15 10:27:38 pause-642487 crio[2298]: time="2025-11-15T10:27:38.165599262Z" level=info msg="Started container" PID=2509 containerID=ad476564fba4d88d6b14f082586a864bcf39dadd4f5371f877dda5662a573cf6 description=kube-system/etcd-pause-642487/etcd id=a5a9aeb1-bee4-4bc8-ad6a-cc82ad540b1a name=/runtime.v1.RuntimeService/StartContainer sandboxID=49d8d7e8aa7b9cae28f09cefc2873c31c9f72b2bdc72a52acbf2f1258daeb3c8
	Nov 15 10:27:38 pause-642487 crio[2298]: time="2025-11-15T10:27:38.1660251Z" level=info msg="Starting container: 3d25753d2c98879b7a05bcb03f07c48e41dbace9c81399d0d21f0952ab2ca92b" id=c7676357-4a64-4aec-8405-976c3207eeba name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:27:38 pause-642487 crio[2298]: time="2025-11-15T10:27:38.166374336Z" level=info msg="Started container" PID=2528 containerID=79929c6a2bd5c66ee1f199a1d7d3bd6158f0914fb6ede8beb6a05b4c765efed2 description=kube-system/kube-scheduler-pause-642487/kube-scheduler id=e86b8d04-72ce-4b3e-b47f-402ee9eed324 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b77e08415993d81e6ca4abbfd1702e12a8dfc1ba16d575eb54d4485a48ce3af8
	Nov 15 10:27:38 pause-642487 crio[2298]: time="2025-11-15T10:27:38.173426468Z" level=info msg="Started container" PID=2532 containerID=3d25753d2c98879b7a05bcb03f07c48e41dbace9c81399d0d21f0952ab2ca92b description=kube-system/kube-proxy-jhknt/kube-proxy id=c7676357-4a64-4aec-8405-976c3207eeba name=/runtime.v1.RuntimeService/StartContainer sandboxID=7702971120d9b99825817dfaafd32f436e4d52f180683730eacab1fdeff88596
	Nov 15 10:27:38 pause-642487 crio[2298]: time="2025-11-15T10:27:38.173678737Z" level=info msg="Created container a5225c7077c6f279cc61d6fc15cdb423b982b29412dd8a173b4bdc1c976af30b: kube-system/coredns-66bc5c9577-8nbgb/coredns" id=00518024-9c37-40a7-b629-df1a1bc5c9ab name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:27:38 pause-642487 crio[2298]: time="2025-11-15T10:27:38.176054575Z" level=info msg="Starting container: a5225c7077c6f279cc61d6fc15cdb423b982b29412dd8a173b4bdc1c976af30b" id=8192e47d-e4cc-45d9-b907-0f443408cc33 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:27:38 pause-642487 crio[2298]: time="2025-11-15T10:27:38.18312234Z" level=info msg="Started container" PID=2542 containerID=a5225c7077c6f279cc61d6fc15cdb423b982b29412dd8a173b4bdc1c976af30b description=kube-system/coredns-66bc5c9577-8nbgb/coredns id=8192e47d-e4cc-45d9-b907-0f443408cc33 name=/runtime.v1.RuntimeService/StartContainer sandboxID=06739937e202986d610f705d73fe19afa5efb5d7d718935a90480d934a791875
	Nov 15 10:27:48 pause-642487 crio[2298]: time="2025-11-15T10:27:48.657011165Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:27:48 pause-642487 crio[2298]: time="2025-11-15T10:27:48.661775552Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:27:48 pause-642487 crio[2298]: time="2025-11-15T10:27:48.661805824Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:27:48 pause-642487 crio[2298]: time="2025-11-15T10:27:48.661834525Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:27:48 pause-642487 crio[2298]: time="2025-11-15T10:27:48.665441401Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:27:48 pause-642487 crio[2298]: time="2025-11-15T10:27:48.665465166Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:27:48 pause-642487 crio[2298]: time="2025-11-15T10:27:48.665488787Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:27:48 pause-642487 crio[2298]: time="2025-11-15T10:27:48.66901515Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:27:48 pause-642487 crio[2298]: time="2025-11-15T10:27:48.669039883Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:27:48 pause-642487 crio[2298]: time="2025-11-15T10:27:48.669060952Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:27:48 pause-642487 crio[2298]: time="2025-11-15T10:27:48.672343657Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:27:48 pause-642487 crio[2298]: time="2025-11-15T10:27:48.672369997Z" level=info msg="Updated default CNI network name to kindnet"
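The CRI-O log above shows kindnet writing its CNI config atomically (10-kindnet.conflist.temp created, written, then renamed into place) and CRI-O's CNI monitor switching the default network to kindnet on each event. The resulting config can be inspected directly on the node; this sketch assumes the pause-642487 profile is still running:

    # list and read the CNI config that CRI-O loaded
    minikube -p pause-642487 ssh -- sudo ls -la /etc/cni/net.d/
    minikube -p pause-642487 ssh -- sudo cat /etc/cni/net.d/10-kindnet.conflist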
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	a5225c7077c6f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   24 seconds ago       Running             coredns                   1                   06739937e2029       coredns-66bc5c9577-8nbgb               kube-system
	79929c6a2bd5c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   24 seconds ago       Running             kube-scheduler            1                   b77e08415993d       kube-scheduler-pause-642487            kube-system
	3d25753d2c988       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   24 seconds ago       Running             kube-proxy                1                   7702971120d9b       kube-proxy-jhknt                       kube-system
	602dacd6f7538       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   24 seconds ago       Running             kindnet-cni               1                   f3c1c95bc887b       kindnet-jh5hv                          kube-system
	0ef58fd5a90ee       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   24 seconds ago       Running             kube-apiserver            1                   2bd1e8d174df2       kube-apiserver-pause-642487            kube-system
	ad476564fba4d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   24 seconds ago       Running             etcd                      1                   49d8d7e8aa7b9       etcd-pause-642487                      kube-system
	fb8520106a10f       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   24 seconds ago       Running             kube-controller-manager   1                   344368da8364c       kube-controller-manager-pause-642487   kube-system
	9cc96cc9a3498       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   36 seconds ago       Exited              coredns                   0                   06739937e2029       coredns-66bc5c9577-8nbgb               kube-system
	8bc8fa4817aeb       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   About a minute ago   Exited              kindnet-cni               0                   f3c1c95bc887b       kindnet-jh5hv                          kube-system
	7816357c42fe9       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   About a minute ago   Exited              kube-proxy                0                   7702971120d9b       kube-proxy-jhknt                       kube-system
	f6ded8d1e4b36       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   About a minute ago   Exited              etcd                      0                   49d8d7e8aa7b9       etcd-pause-642487                      kube-system
	3bc6859c7b004       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   About a minute ago   Exited              kube-controller-manager   0                   344368da8364c       kube-controller-manager-pause-642487   kube-system
	212c7652d9dd7       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   About a minute ago   Exited              kube-apiserver            0                   2bd1e8d174df2       kube-apiserver-pause-642487            kube-system
	4e966f58ab607       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   About a minute ago   Exited              kube-scheduler            0                   b77e08415993d       kube-scheduler-pause-642487            kube-system
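The container listing shows two generations of every kube-system container in the same sandboxes: attempt 0 of each has Exited and attempt 1 is Running, which is the expected picture after the profile is restarted and the kubelet recreates its static and daemonset pods. The same view comes straight from the runtime on the node, e.g.:

    # list all containers, including exited earlier attempts
    minikube -p pause-642487 ssh -- sudo crictl ps -a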
	
	
	==> coredns [9cc96cc9a3498c97b9e9f90a49f184e0c4c09a5789e8c22374b75eefad9909ec] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41396 - 39398 "HINFO IN 2315590844852274863.5946995717533965393. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012707555s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a5225c7077c6f279cc61d6fc15cdb423b982b29412dd8a173b4bdc1c976af30b] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found]
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found]
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38955 - 10052 "HINFO IN 7011222262236218715.6389590110140029362. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015638808s
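The restarted CoreDNS instance initially cannot list namespaces, services or endpointslices because the bootstrap clusterroles (including system:coredns) are not yet visible while the control plane comes back up; once RBAC is re-synced it starts serving normally, as the final HINFO query shows. If the errors persisted, the objects named in them could be checked directly (context name assumed to match the profile):

    # verify the CoreDNS RBAC objects the errors above refer to
    kubectl --context pause-642487 get clusterrole system:coredns
    kubectl --context pause-642487 get clusterrolebinding system:coredns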
	
	
	==> describe nodes <==
	Name:               pause-642487
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-642487
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=pause-642487
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_26_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:26:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-642487
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:27:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:27:25 +0000   Sat, 15 Nov 2025 10:26:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:27:25 +0000   Sat, 15 Nov 2025 10:26:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:27:25 +0000   Sat, 15 Nov 2025 10:26:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:27:25 +0000   Sat, 15 Nov 2025 10:27:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-642487
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                68100b6c-9438-4e63-91cd-fedd50e3a311
	  Boot ID:                    6a33f504-3a8c-4d49-8fc5-301cf5275efc
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-8nbgb                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     79s
	  kube-system                 etcd-pause-642487                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         85s
	  kube-system                 kindnet-jh5hv                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      79s
	  kube-system                 kube-apiserver-pause-642487             250m (3%)     0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-controller-manager-pause-642487    200m (2%)     0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-proxy-jhknt                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-pause-642487             100m (1%)     0 (0%)      0 (0%)           0 (0%)         85s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 78s                kube-proxy       
	  Normal   Starting                 18s                kube-proxy       
	  Warning  CgroupV1                 91s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  91s (x8 over 91s)  kubelet          Node pause-642487 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    91s (x8 over 91s)  kubelet          Node pause-642487 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     91s (x8 over 91s)  kubelet          Node pause-642487 status is now: NodeHasSufficientPID
	  Normal   Starting                 85s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 85s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  84s                kubelet          Node pause-642487 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    84s                kubelet          Node pause-642487 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     84s                kubelet          Node pause-642487 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           80s                node-controller  Node pause-642487 event: Registered Node pause-642487 in Controller
	  Normal   NodeReady                38s                kubelet          Node pause-642487 status is now: NodeReady
	  Normal   RegisteredNode           15s                node-controller  Node pause-642487 event: Registered Node pause-642487 in Controller
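The duplicated Starting and RegisteredNode events (roughly 80s and 15-18s old) are another trace of the restart: kube-proxy and the node controller have each come up twice within the node's 91-second lifetime. The summary above is what kubectl produces for the profile's node, e.g.:

    # reproduce the node summary above for the pause-642487 profile
    kubectl --context pause-642487 describe node pause-642487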
	
	
	==> dmesg <==
	[Nov15 09:41] kmem.limit_in_bytes is deprecated and will be removed. Writing any value to this file has no effect. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov15 09:44] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +1.059558] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +1.023907] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +1.023868] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +1.023912] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +1.023925] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +2.047814] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +4.031639] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[  +8.127259] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[Nov15 09:45] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[ +32.253211] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	
	
	==> etcd [ad476564fba4d88d6b14f082586a864bcf39dadd4f5371f877dda5662a573cf6] <==
	{"level":"warn","ts":"2025-11-15T10:27:44.418507Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"186.240878ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:volume-scheduler\" limit:1 ","response":"range_response_count:1 size:725"}
	{"level":"info","ts":"2025-11-15T10:27:44.418586Z","caller":"traceutil/trace.go:172","msg":"trace[1882963809] linearizableReadLoop","detail":"{readStateIndex:503; appliedIndex:502; }","duration":"126.811592ms","start":"2025-11-15T10:27:44.291763Z","end":"2025-11-15T10:27:44.418575Z","steps":["trace[1882963809] 'read index received'  (duration: 46.42µs)","trace[1882963809] 'applied index is now lower than readState.Index'  (duration: 126.764556ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T10:27:44.418593Z","caller":"traceutil/trace.go:172","msg":"trace[539007305] range","detail":"{range_begin:/registry/clusterroles/system:volume-scheduler; range_end:; response_count:1; response_revision:477; }","duration":"186.342414ms","start":"2025-11-15T10:27:44.232238Z","end":"2025-11-15T10:27:44.418581Z","steps":["trace[539007305] 'agreement among raft nodes before linearized reading'  (duration: 59.564425ms)","trace[539007305] 'range keys from in-memory index tree'  (duration: 126.582575ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:27:44.419257Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"186.604363ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/etcd-pause-642487.1878274a2b4ddd4e\" limit:1 ","response":"range_response_count:1 size:804"}
	{"level":"info","ts":"2025-11-15T10:27:44.419326Z","caller":"traceutil/trace.go:172","msg":"trace[2025804408] range","detail":"{range_begin:/registry/events/kube-system/etcd-pause-642487.1878274a2b4ddd4e; range_end:; response_count:1; response_revision:477; }","duration":"186.666882ms","start":"2025-11-15T10:27:44.232631Z","end":"2025-11-15T10:27:44.419298Z","steps":["trace[2025804408] 'agreement among raft nodes before linearized reading'  (duration: 185.9811ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T10:27:44.608881Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.510957ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:certificates.k8s.io:kubelet-serving-approver\" limit:1 ","response":"range_response_count:1 size:684"}
	{"level":"info","ts":"2025-11-15T10:27:44.608949Z","caller":"traceutil/trace.go:172","msg":"trace[1605097997] range","detail":"{range_begin:/registry/clusterroles/system:certificates.k8s.io:kubelet-serving-approver; range_end:; response_count:1; response_revision:478; }","duration":"123.60061ms","start":"2025-11-15T10:27:44.485335Z","end":"2025-11-15T10:27:44.608936Z","steps":["trace[1605097997] 'agreement among raft nodes before linearized reading'  (duration: 62.770687ms)","trace[1605097997] 'range keys from in-memory index tree'  (duration: 60.635785ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T10:27:44.608883Z","caller":"traceutil/trace.go:172","msg":"trace[1972206687] transaction","detail":"{read_only:false; response_revision:479; number_of_response:1; }","duration":"186.184585ms","start":"2025-11-15T10:27:44.422678Z","end":"2025-11-15T10:27:44.608863Z","steps":["trace[1972206687] 'process raft request'  (duration: 125.470476ms)","trace[1972206687] 'compare'  (duration: 60.60663ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:27:44.862883Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"183.550982ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:kube-scheduler\" limit:1 ","response":"range_response_count:1 size:1835"}
	{"level":"warn","ts":"2025-11-15T10:27:44.862928Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.397729ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356660943256300 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-pause-642487.18782749e30878dc\" mod_revision:476 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-pause-642487.18782749e30878dc\" value_size:745 lease:6414984624088480390 >> failure:<request_range:<key:\"/registry/events/kube-system/kube-apiserver-pause-642487.18782749e30878dc\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-15T10:27:44.862970Z","caller":"traceutil/trace.go:172","msg":"trace[1151604462] range","detail":"{range_begin:/registry/clusterroles/system:kube-scheduler; range_end:; response_count:1; response_revision:480; }","duration":"183.628102ms","start":"2025-11-15T10:27:44.679304Z","end":"2025-11-15T10:27:44.862932Z","steps":["trace[1151604462] 'agreement among raft nodes before linearized reading'  (duration: 60.136831ms)","trace[1151604462] 'range keys from in-memory index tree'  (duration: 123.333731ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T10:27:44.863036Z","caller":"traceutil/trace.go:172","msg":"trace[152964354] transaction","detail":"{read_only:false; response_revision:481; number_of_response:1; }","duration":"184.96824ms","start":"2025-11-15T10:27:44.678047Z","end":"2025-11-15T10:27:44.863015Z","steps":["trace[152964354] 'process raft request'  (duration: 61.419277ms)","trace[152964354] 'compare'  (duration: 123.317437ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:27:45.062632Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.756469ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:controller:deployment-controller\" limit:1 ","response":"range_response_count:1 size:915"}
	{"level":"info","ts":"2025-11-15T10:27:45.062655Z","caller":"traceutil/trace.go:172","msg":"trace[1685650576] transaction","detail":"{read_only:false; response_revision:483; number_of_response:1; }","duration":"131.422385ms","start":"2025-11-15T10:27:44.931205Z","end":"2025-11-15T10:27:45.062627Z","steps":["trace[1685650576] 'process raft request'  (duration: 70.371666ms)","trace[1685650576] 'compare'  (duration: 60.923954ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T10:27:45.062698Z","caller":"traceutil/trace.go:172","msg":"trace[2098353046] range","detail":"{range_begin:/registry/clusterroles/system:controller:deployment-controller; range_end:; response_count:1; response_revision:482; }","duration":"129.839966ms","start":"2025-11-15T10:27:44.932844Z","end":"2025-11-15T10:27:45.062684Z","steps":["trace[2098353046] 'agreement among raft nodes before linearized reading'  (duration: 68.70765ms)","trace[2098353046] 'range keys from in-memory index tree'  (duration: 60.958742ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:27:45.343387Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"184.261824ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:controller:expand-controller\" limit:1 ","response":"range_response_count:1 size:801"}
	{"level":"info","ts":"2025-11-15T10:27:45.343467Z","caller":"traceutil/trace.go:172","msg":"trace[354462284] range","detail":"{range_begin:/registry/clusterroles/system:controller:expand-controller; range_end:; response_count:1; response_revision:484; }","duration":"184.354872ms","start":"2025-11-15T10:27:45.159097Z","end":"2025-11-15T10:27:45.343452Z","steps":["trace[354462284] 'agreement among raft nodes before linearized reading'  (duration: 60.737772ms)","trace[354462284] 'range keys from in-memory index tree'  (duration: 123.430916ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:27:45.343421Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.537669ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356660943256319 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-66bc5c9577-8nbgb.1878274af3e37f2d\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-66bc5c9577-8nbgb.1878274af3e37f2d\" value_size:733 lease:6414984624088480390 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-15T10:27:45.343583Z","caller":"traceutil/trace.go:172","msg":"trace[357575132] transaction","detail":"{read_only:false; response_revision:485; number_of_response:1; }","duration":"185.507543ms","start":"2025-11-15T10:27:45.158063Z","end":"2025-11-15T10:27:45.343570Z","steps":["trace[357575132] 'process raft request'  (duration: 61.777329ms)","trace[357575132] 'compare'  (duration: 123.43916ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:27:45.542634Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.135638ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:controller:job-controller\" limit:1 ","response":"range_response_count:1 size:782"}
	{"level":"info","ts":"2025-11-15T10:27:45.542701Z","caller":"traceutil/trace.go:172","msg":"trace[1152570653] range","detail":"{range_begin:/registry/clusterroles/system:controller:job-controller; range_end:; response_count:1; response_revision:486; }","duration":"132.217924ms","start":"2025-11-15T10:27:45.410469Z","end":"2025-11-15T10:27:45.542687Z","steps":["trace[1152570653] 'agreement among raft nodes before linearized reading'  (duration: 73.232546ms)","trace[1152570653] 'range keys from in-memory index tree'  (duration: 58.804489ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T10:27:45.542913Z","caller":"traceutil/trace.go:172","msg":"trace[604292703] transaction","detail":"{read_only:false; response_revision:487; number_of_response:1; }","duration":"134.020621ms","start":"2025-11-15T10:27:45.408876Z","end":"2025-11-15T10:27:45.542897Z","steps":["trace[604292703] 'process raft request'  (duration: 74.866601ms)","trace[604292703] 'compare'  (duration: 59.034454ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T10:27:45.766930Z","caller":"traceutil/trace.go:172","msg":"trace[1166695952] transaction","detail":"{read_only:false; response_revision:489; number_of_response:1; }","duration":"157.178451ms","start":"2025-11-15T10:27:45.609733Z","end":"2025-11-15T10:27:45.766911Z","steps":["trace[1166695952] 'process raft request'  (duration: 58.97062ms)","trace[1166695952] 'compare'  (duration: 98.057642ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:27:45.766904Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.035557ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:controller:replication-controller\" limit:1 ","response":"range_response_count:1 size:830"}
	{"level":"info","ts":"2025-11-15T10:27:45.767045Z","caller":"traceutil/trace.go:172","msg":"trace[466302424] range","detail":"{range_begin:/registry/clusterroles/system:controller:replication-controller; range_end:; response_count:1; response_revision:488; }","duration":"156.188567ms","start":"2025-11-15T10:27:45.610840Z","end":"2025-11-15T10:27:45.767029Z","steps":["trace[466302424] 'agreement among raft nodes before linearized reading'  (duration: 57.838344ms)","trace[466302424] 'range keys from in-memory index tree'  (duration: 98.09122ms)"],"step_count":2}
	
	
	==> etcd [f6ded8d1e4b3632bdec1a6e539a1f29b3c4950da7cb62c97ca951715e1ec6888] <==
	{"level":"warn","ts":"2025-11-15T10:26:35.533535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:26:35.543811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:26:35.557137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:26:35.572813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:26:35.580742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:26:35.586855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:26:35.627806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57814","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-15T10:27:30.603337Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-15T10:27:30.603431Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-642487","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-11-15T10:27:30.603525Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-15T10:27:32.690707Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-15T10:27:32.690811Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T10:27:32.690870Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-11-15T10:27:32.690924Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-15T10:27:32.690915Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-15T10:27:32.690915Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-15T10:27:32.690988Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-15T10:27:32.691010Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-15T10:27:32.690974Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-15T10:27:32.691033Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T10:27:32.690977Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-15T10:27:32.692632Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-11-15T10:27:32.692706Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T10:27:32.692742Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-15T10:27:32.692756Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-642487","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 10:28:03 up  2:10,  0 user,  load average: 3.18, 1.79, 1.27
	Linux pause-642487 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [602dacd6f75382f88ebd4b70e2712c63d504c2104f09e69e7ed06953540d3bd6] <==
	I1115 10:27:38.360572       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:27:38.361038       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1115 10:27:38.361247       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:27:38.361265       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:27:38.361288       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:27:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:27:38.656876       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:27:38.756577       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:27:38.756619       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:27:38.756746       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 10:27:42.562425       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"kindnet\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found]" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1115 10:27:42.562790       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"kindnet\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 10:27:42.563251       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"kindnet\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found]" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1115 10:27:43.856726       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:27:43.856773       1 metrics.go:72] Registering metrics
	I1115 10:27:43.856852       1 controller.go:711] "Syncing nftables rules"
	I1115 10:27:48.656566       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:27:48.656636       1 main.go:301] handling current node
	I1115 10:27:58.658026       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:27:58.658060       1 main.go:301] handling current node
	
	
	==> kindnet [8bc8fa4817aeb775ecf14d85b0146af1da01b6d0d4276cb3d4f7730073d8be92] <==
	I1115 10:26:44.956855       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:26:44.958991       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1115 10:26:44.959574       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:26:44.959592       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:26:44.959614       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:26:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:26:45.211811       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:26:45.211833       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:26:45.211845       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:26:45.212185       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 10:27:15.212576       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 10:27:15.212578       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1115 10:27:15.212609       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 10:27:15.212578       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1115 10:27:16.512863       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:27:16.512888       1 metrics.go:72] Registering metrics
	I1115 10:27:16.512999       1 controller.go:711] "Syncing nftables rules"
	I1115 10:27:25.216350       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:27:25.216410       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0ef58fd5a90ee7a6981b544c76026e5556c7883732036cbb3edd93733ccf4a6c] <==
	I1115 10:27:42.464236       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 10:27:42.464575       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1115 10:27:42.465290       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 10:27:42.464592       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1115 10:27:42.472373       1 aggregator.go:171] initial CRD sync complete...
	I1115 10:27:42.472391       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 10:27:42.472399       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 10:27:42.472405       1 cache.go:39] Caches are synced for autoregister controller
	I1115 10:27:42.480524       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1115 10:27:42.480899       1 policy_source.go:240] refreshing policies
	I1115 10:27:42.562450       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 10:27:42.562900       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1115 10:27:42.565637       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 10:27:42.569682       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1115 10:27:42.566738       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1115 10:27:42.570877       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 10:27:42.572922       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:27:42.576919       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1115 10:27:42.830673       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 10:27:43.352970       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:27:46.897933       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:27:48.244647       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:27:48.493469       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:27:48.545580       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:27:48.643631       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [212c7652d9dd7a0609b346e2da303aaeeb5bbe32e002bba5a7822c363e37bab1] <==
	W1115 10:27:31.608085       1 logging.go:55] [core] [Channel #9 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.608106       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.608136       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.608141       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.608146       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.608149       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.608151       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.608173       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.608276       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.609486       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.609656       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.609679       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.609694       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.609704       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.609664       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.609786       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.609741       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.609747       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.609883       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.609896       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.609902       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.609909       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.609964       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.609995       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 10:27:31.610104       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [3bc6859c7b0047c536c2b2ef1f446ac2fc769d38f898f17f61d309bf77faba98] <==
	I1115 10:26:43.201635       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 10:26:43.201699       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 10:26:43.201789       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 10:26:43.202763       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 10:26:43.202813       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1115 10:26:43.202824       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1115 10:26:43.203171       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1115 10:26:43.203347       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1115 10:26:43.203369       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 10:26:43.203428       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1115 10:26:43.203457       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 10:26:43.204922       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 10:26:43.205029       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:26:43.205044       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 10:26:43.205081       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 10:26:43.205263       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 10:26:43.205715       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 10:26:43.206652       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 10:26:43.209635       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1115 10:26:43.211775       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 10:26:43.300651       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:26:43.300668       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:26:43.300674       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 10:26:43.310191       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:27:28.156864       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [fb8520106a10f5510cb789cf55fa41e2dcef15b4e49b1e8e695b835e5d20c27f] <==
	I1115 10:27:48.239622       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:27:48.239643       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:27:48.239653       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 10:27:48.239632       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 10:27:48.239813       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 10:27:48.239858       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 10:27:48.240252       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1115 10:27:48.240284       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 10:27:48.240324       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 10:27:48.240290       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-642487"
	I1115 10:27:48.240506       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1115 10:27:48.241480       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 10:27:48.243564       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 10:27:48.245180       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1115 10:27:48.245225       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1115 10:27:48.245267       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1115 10:27:48.245283       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1115 10:27:48.245289       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1115 10:27:48.245327       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:27:48.245353       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 10:27:48.250907       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 10:27:48.254142       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 10:27:48.256366       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 10:27:48.259607       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 10:27:48.263839       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [3d25753d2c98879b7a05bcb03f07c48e41dbace9c81399d0d21f0952ab2ca92b] <==
	I1115 10:27:38.456797       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:27:38.588910       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1115 10:27:42.569318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"pause-642487\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1115 10:27:44.189480       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:27:44.189515       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1115 10:27:44.189598       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:27:44.208199       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:27:44.208249       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:27:44.213740       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:27:44.214108       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:27:44.214140       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:27:44.215650       1 config.go:200] "Starting service config controller"
	I1115 10:27:44.215676       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:27:44.215685       1 config.go:309] "Starting node config controller"
	I1115 10:27:44.215714       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:27:44.215722       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:27:44.215739       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:27:44.215754       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:27:44.215777       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:27:44.215782       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:27:44.316038       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:27:44.316077       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:27:44.316106       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [7816357c42fe9c2750e08cc8de3eaa7ada90dbdcaa66ef57679625b7d186d0b6] <==
	I1115 10:26:44.801084       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:26:44.961260       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:26:45.061554       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:26:45.061601       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1115 10:26:45.061715       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:26:45.094386       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:26:45.094466       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:26:45.102862       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:26:45.104412       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:26:45.104498       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:26:45.106564       1 config.go:309] "Starting node config controller"
	I1115 10:26:45.106867       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:26:45.107107       1 config.go:200] "Starting service config controller"
	I1115 10:26:45.107129       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:26:45.107408       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:26:45.107456       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:26:45.108979       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:26:45.109004       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:26:45.207640       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:26:45.207737       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:26:45.207770       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:26:45.210031       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4e966f58ab607d5bbe97198f865699126b593ea8fce93b6ac7fa51dbf052e144] <==
	E1115 10:26:36.405475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 10:26:36.405545       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 10:26:36.405617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 10:26:36.404532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 10:26:36.405684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 10:26:36.405691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 10:26:36.404901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1115 10:26:36.406465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 10:26:36.406690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 10:26:37.230174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 10:26:37.243154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 10:26:37.265299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 10:26:37.301613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 10:26:37.303478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 10:26:37.341544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1115 10:26:37.376966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 10:26:37.457178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 10:26:37.484666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1115 10:26:39.100746       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:27:30.604464       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1115 10:27:30.604485       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:27:30.605483       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1115 10:27:30.607067       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1115 10:27:30.607135       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1115 10:27:30.607202       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [79929c6a2bd5c66ee1f199a1d7d3bd6158f0914fb6ede8beb6a05b4c765efed2] <==
	I1115 10:27:39.669714       1 serving.go:386] Generated self-signed cert in-memory
	W1115 10:27:42.374602       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1115 10:27:42.374706       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1115 10:27:42.374740       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1115 10:27:42.374766       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1115 10:27:42.480379       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 10:27:42.480464       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:27:42.556141       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:27:42.556278       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:27:42.559073       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:27:42.559740       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 10:27:42.657152       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:27:38 pause-642487 kubelet[1402]: E1115 10:27:38.017075    1402 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-642487\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="bbc3b022e46d3c086f435225c4e0e99e" pod="kube-system/kube-apiserver-pause-642487"
	Nov 15 10:27:38 pause-642487 kubelet[1402]: E1115 10:27:38.017353    1402 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jhknt\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="66848a4f-7a86-4b64-adb1-2ebb61ff9ddc" pod="kube-system/kube-proxy-jhknt"
	Nov 15 10:27:38 pause-642487 kubelet[1402]: E1115 10:27:38.017642    1402 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-jh5hv\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="17e3aa21-2ac9-4ce2-9a63-54e13281bde5" pod="kube-system/kindnet-jh5hv"
	Nov 15 10:27:38 pause-642487 kubelet[1402]: I1115 10:27:38.017897    1402 scope.go:117] "RemoveContainer" containerID="9cc96cc9a3498c97b9e9f90a49f184e0c4c09a5789e8c22374b75eefad9909ec"
	Nov 15 10:27:38 pause-642487 kubelet[1402]: E1115 10:27:38.017891    1402 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-642487\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="46fb057cdc1d9b8324e561dc35393527" pod="kube-system/kube-scheduler-pause-642487"
	Nov 15 10:27:38 pause-642487 kubelet[1402]: E1115 10:27:38.018230    1402 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-642487\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="9cca22b901a4b9e18450d56334cf0a17" pod="kube-system/kube-controller-manager-pause-642487"
	Nov 15 10:27:38 pause-642487 kubelet[1402]: E1115 10:27:38.018507    1402 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-642487\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="bbc3b022e46d3c086f435225c4e0e99e" pod="kube-system/kube-apiserver-pause-642487"
	Nov 15 10:27:38 pause-642487 kubelet[1402]: E1115 10:27:38.018765    1402 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jhknt\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="66848a4f-7a86-4b64-adb1-2ebb61ff9ddc" pod="kube-system/kube-proxy-jhknt"
	Nov 15 10:27:38 pause-642487 kubelet[1402]: E1115 10:27:38.019006    1402 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-jh5hv\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="17e3aa21-2ac9-4ce2-9a63-54e13281bde5" pod="kube-system/kindnet-jh5hv"
	Nov 15 10:27:38 pause-642487 kubelet[1402]: E1115 10:27:38.019317    1402 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-8nbgb\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="9f4b3526-3889-4bc5-81e0-cbab60c70c2d" pod="kube-system/coredns-66bc5c9577-8nbgb"
	Nov 15 10:27:38 pause-642487 kubelet[1402]: E1115 10:27:38.019580    1402 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-642487\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="46fb057cdc1d9b8324e561dc35393527" pod="kube-system/kube-scheduler-pause-642487"
	Nov 15 10:27:38 pause-642487 kubelet[1402]: E1115 10:27:38.019849    1402 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-642487\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="26a9d5105ab43ff6347035d744bf86c1" pod="kube-system/etcd-pause-642487"
	Nov 15 10:27:42 pause-642487 kubelet[1402]: E1115 10:27:42.272870    1402 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-642487\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-642487' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 15 10:27:42 pause-642487 kubelet[1402]: E1115 10:27:42.272887    1402 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-642487\" is forbidden: User \"system:node:pause-642487\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-642487' and this object" podUID="46fb057cdc1d9b8324e561dc35393527" pod="kube-system/kube-scheduler-pause-642487"
	Nov 15 10:27:42 pause-642487 kubelet[1402]: E1115 10:27:42.274200    1402 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-642487\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-642487' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 15 10:27:42 pause-642487 kubelet[1402]: E1115 10:27:42.381611    1402 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-642487\" is forbidden: User \"system:node:pause-642487\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-642487' and this object" podUID="26a9d5105ab43ff6347035d744bf86c1" pod="kube-system/etcd-pause-642487"
	Nov 15 10:27:42 pause-642487 kubelet[1402]: E1115 10:27:42.456134    1402 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-642487\" is forbidden: User \"system:node:pause-642487\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-642487' and this object" podUID="9cca22b901a4b9e18450d56334cf0a17" pod="kube-system/kube-controller-manager-pause-642487"
	Nov 15 10:27:42 pause-642487 kubelet[1402]: E1115 10:27:42.459799    1402 status_manager.go:1018] "Failed to get status for pod" err=<
	Nov 15 10:27:42 pause-642487 kubelet[1402]:         pods "kube-apiserver-pause-642487" is forbidden: User "system:node:pause-642487" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-642487' and this object
	Nov 15 10:27:42 pause-642487 kubelet[1402]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found]
	Nov 15 10:27:42 pause-642487 kubelet[1402]:  > podUID="bbc3b022e46d3c086f435225c4e0e99e" pod="kube-system/kube-apiserver-pause-642487"
	Nov 15 10:27:49 pause-642487 kubelet[1402]: W1115 10:27:49.076185    1402 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 15 10:27:57 pause-642487 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:27:57 pause-642487 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:27:57 pause-642487 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-642487 -n pause-642487
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-642487 -n pause-642487: exit status 2 (358.261465ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-642487 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.38s)
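Note: the failed pause check above comes down to one command, out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-642487, which still printed "Running" (exit status 2) after the pause attempt. The Go sketch below reproduces that check outside the test harness; it is not the helpers_test.go code itself, and the binary path and profile name are simply copied from the log above.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Ask minikube for the API server state of the paused profile, exactly as
	// the post-mortem helper did above.
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.APIServer}}", "-p", "pause-642487", "-n", "pause-642487")
	out, err := cmd.CombinedOutput()
	// After a successful pause the API server should no longer report Running;
	// in the log above it still does, and the command exits with status 2.
	fmt.Printf("status output: %s", out)
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Printf("exit code: %d\n", exitErr.ExitCode())
	}
}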

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-087235 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-087235 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (263.546355ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:34:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-087235 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
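Note: the MK_ADDON_ENABLE_PAUSED error above comes from minikube's "is the cluster paused?" check, which (per its own error text) runs sudo runc list -f json on the node and treats a non-zero exit as a failure; here it fails because /run/runc does not exist on this crio-based profile. The Go sketch below re-runs the same command through docker exec so the error can be observed directly. It assumes the old-k8s-version-087235 node container is still running, and it is only an illustration, not minikube's internal implementation.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command minikube quoted in its error message, executed inside the
	// node container via docker exec.
	cmd := exec.Command("docker", "exec", "old-k8s-version-087235",
		"sudo", "runc", "list", "-f", "json")
	out, err := cmd.CombinedOutput()
	fmt.Printf("output: %s", out)
	if err != nil {
		// Expected on this profile: exit status 1 with
		// "open /run/runc: no such file or directory" on stderr.
		fmt.Println("runc list failed:", err)
	}
}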
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-087235 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-087235 describe deploy/metrics-server -n kube-system: exit status 1 (60.211516ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-087235 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-087235
helpers_test.go:243: (dbg) docker inspect old-k8s-version-087235:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3d4715b4872d2f4603f13f0191a01c4322a26432e1316c2ae62b1c571bea1814",
	        "Created": "2025-11-15T10:33:24.829295884Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 340815,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:33:24.866601325Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/3d4715b4872d2f4603f13f0191a01c4322a26432e1316c2ae62b1c571bea1814/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3d4715b4872d2f4603f13f0191a01c4322a26432e1316c2ae62b1c571bea1814/hostname",
	        "HostsPath": "/var/lib/docker/containers/3d4715b4872d2f4603f13f0191a01c4322a26432e1316c2ae62b1c571bea1814/hosts",
	        "LogPath": "/var/lib/docker/containers/3d4715b4872d2f4603f13f0191a01c4322a26432e1316c2ae62b1c571bea1814/3d4715b4872d2f4603f13f0191a01c4322a26432e1316c2ae62b1c571bea1814-json.log",
	        "Name": "/old-k8s-version-087235",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-087235:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-087235",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3d4715b4872d2f4603f13f0191a01c4322a26432e1316c2ae62b1c571bea1814",
	                "LowerDir": "/var/lib/docker/overlay2/2a7fcfc651315e1f255f5887dcd49b9489c6d9bc0d5bb2729e48f789bfad9233-init/diff:/var/lib/docker/overlay2/507c85b6fb43d9c98216fac79d7a8d08dd20b2a63d0fbfc758336e5d38c04044/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2a7fcfc651315e1f255f5887dcd49b9489c6d9bc0d5bb2729e48f789bfad9233/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2a7fcfc651315e1f255f5887dcd49b9489c6d9bc0d5bb2729e48f789bfad9233/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2a7fcfc651315e1f255f5887dcd49b9489c6d9bc0d5bb2729e48f789bfad9233/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-087235",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-087235/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-087235",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-087235",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-087235",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8b5a3fb22bdfc2059b8ff3640164434c3e76b4bc0907a7219b4b15bbd24c9884",
	            "SandboxKey": "/var/run/docker/netns/8b5a3fb22bdf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-087235": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "11bae6d0a5454f5603cad7765ca7366f9be46b927618f2c698dc454d778aa49c",
	                    "EndpointID": "14bb36cc100f3fa2e2b408693f0aa000ea3b0a30b7d53beaffcbee381a5e1416",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "f2:c5:5d:18:53:60",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-087235",
	                        "3d4715b4872d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
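Note: the inspect output above shows the node's SSH endpoint (22/tcp) published on 127.0.0.1:33089. The Go sketch below reads that mapping back with the same Go template that minikube itself runs later in this log (docker container inspect -f with an index into NetworkSettings.Ports "22/tcp" ... HostPort); the container name is taken from the output above, and this is only an illustration, not part of the test suite.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Index into NetworkSettings.Ports["22/tcp"][0].HostPort, matching the
	// port table shown in the inspect output above.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect",
		"-f", format, "old-k8s-version-087235").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}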
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-087235 -n old-k8s-version-087235
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-087235 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-087235 logs -n 25: (1.201506001s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                     ARGS                                                                     │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-931243 sudo crictl ps --all                                                                                                       │ flannel-931243         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 pgrep -a kubelet                                                                                                            │ bridge-931243          │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p flannel-931243 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                │ flannel-931243         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p flannel-931243 sudo ip a s                                                                                                                │ flannel-931243         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p flannel-931243 sudo ip r s                                                                                                                │ flannel-931243         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p flannel-931243 sudo iptables-save                                                                                                         │ flannel-931243         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p flannel-931243 sudo iptables -t nat -L -n -v                                                                                              │ flannel-931243         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p flannel-931243 sudo cat /run/flannel/subnet.env                                                                                           │ flannel-931243         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p flannel-931243 sudo cat /etc/kube-flannel/cni-conf.json                                                                                   │ flannel-931243         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ ssh     │ -p flannel-931243 sudo systemctl status kubelet --all --full --no-pager                                                                      │ flannel-931243         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p flannel-931243 sudo systemctl cat kubelet --no-pager                                                                                      │ flannel-931243         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p flannel-931243 sudo journalctl -xeu kubelet --all --full --no-pager                                                                       │ flannel-931243         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p flannel-931243 sudo cat /etc/kubernetes/kubelet.conf                                                                                      │ flannel-931243         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p flannel-931243 sudo cat /var/lib/kubelet/config.yaml                                                                                      │ flannel-931243         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p flannel-931243 sudo systemctl status docker --all --full --no-pager                                                                       │ flannel-931243         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ ssh     │ -p flannel-931243 sudo systemctl cat docker --no-pager                                                                                       │ flannel-931243         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p flannel-931243 sudo cat /etc/docker/daemon.json                                                                                           │ flannel-931243         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ ssh     │ -p flannel-931243 sudo docker system info                                                                                                    │ flannel-931243         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ ssh     │ -p flannel-931243 sudo systemctl status cri-docker --all --full --no-pager                                                                   │ flannel-931243         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ ssh     │ -p flannel-931243 sudo systemctl cat cri-docker --no-pager                                                                                   │ flannel-931243         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p flannel-931243 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                              │ flannel-931243         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-087235 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ old-k8s-version-087235 │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ ssh     │ -p flannel-931243 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                        │ flannel-931243         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p flannel-931243 sudo cri-dockerd --version                                                                                                 │ flannel-931243         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p flannel-931243 sudo systemctl status containerd --all --full --no-pager                                                                   │ flannel-931243         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:33:33
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:33:33.252200  344850 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:33:33.252526  344850 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:33:33.252536  344850 out.go:374] Setting ErrFile to fd 2...
	I1115 10:33:33.252541  344850 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:33:33.252758  344850 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 10:33:33.253320  344850 out.go:368] Setting JSON to false
	I1115 10:33:33.254491  344850 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8150,"bootTime":1763194663,"procs":270,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:33:33.254591  344850 start.go:143] virtualization: kvm guest
	I1115 10:33:33.257562  344850 out.go:179] * [no-preload-283677] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:33:33.258722  344850 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 10:33:33.258741  344850 notify.go:221] Checking for updates...
	I1115 10:33:33.261475  344850 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:33:33.262675  344850 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:33:33.263651  344850 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube
	I1115 10:33:33.264718  344850 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:33:33.265904  344850 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:33:31.918440  334883 out.go:252]   - Booting up control plane ...
	I1115 10:33:31.918575  334883 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 10:33:31.918684  334883 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 10:33:31.919628  334883 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 10:33:31.934491  334883 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 10:33:31.934666  334883 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 10:33:31.942378  334883 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 10:33:31.942643  334883 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 10:33:31.942684  334883 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 10:33:32.076302  334883 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 10:33:32.076451  334883 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 10:33:33.079899  334883 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.003490158s
	I1115 10:33:33.084676  334883 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 10:33:33.084806  334883 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1115 10:33:33.084997  334883 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 10:33:33.085155  334883 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 10:33:33.267659  344850 config.go:182] Loaded profile config "bridge-931243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:33:33.267742  344850 config.go:182] Loaded profile config "flannel-931243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:33:33.267818  344850 config.go:182] Loaded profile config "old-k8s-version-087235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 10:33:33.267912  344850 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:33:33.294587  344850 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:33:33.294738  344850 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:33:33.360024  344850 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:true NGoroutines:77 SystemTime:2025-11-15 10:33:33.348034181 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:33:33.360182  344850 docker.go:319] overlay module found
	I1115 10:33:33.362099  344850 out.go:179] * Using the docker driver based on user configuration
	I1115 10:33:33.363336  344850 start.go:309] selected driver: docker
	I1115 10:33:33.363352  344850 start.go:930] validating driver "docker" against <nil>
	I1115 10:33:33.363364  344850 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:33:33.364253  344850 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:33:33.428270  344850 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:true NGoroutines:77 SystemTime:2025-11-15 10:33:33.417190014 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:33:33.428426  344850 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 10:33:33.428626  344850 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:33:33.434076  344850 out.go:179] * Using Docker driver with root privileges
	I1115 10:33:33.435294  344850 cni.go:84] Creating CNI manager for ""
	I1115 10:33:33.435361  344850 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:33:33.435375  344850 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 10:33:33.435467  344850 start.go:353] cluster config:
	{Name:no-preload-283677 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-283677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:33:33.437341  344850 out.go:179] * Starting "no-preload-283677" primary control-plane node in "no-preload-283677" cluster
	I1115 10:33:33.438671  344850 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:33:33.440081  344850 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:33:33.441268  344850 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:33:33.441362  344850 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:33:33.441412  344850 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/config.json ...
	I1115 10:33:33.441453  344850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/config.json: {Name:mk8cb6b8af1580655185ee4612ff3de6a8081ae3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:33:33.441604  344850 cache.go:107] acquiring lock: {Name:mk04e19ef4726336e87a2efa989ec89b11194587 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:33:33.441611  344850 cache.go:107] acquiring lock: {Name:mkebd0527ca8cd5425c0189738c4c613b1d0ad77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:33:33.441658  344850 cache.go:107] acquiring lock: {Name:mk160c40720b01bd77226b9ee86c8a56493b3987 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:33:33.441621  344850 cache.go:107] acquiring lock: {Name:mk568a3320f172c7702e0c64f82e9ab66f08dc56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:33:33.441678  344850 cache.go:107] acquiring lock: {Name:mk4538f0a5ff75ff8439835bfd59d64a365cd71b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:33:33.441669  344850 cache.go:107] acquiring lock: {Name:mk6d25d7926738a8037e85ed094d1b802d5c1f77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:33:33.441716  344850 cache.go:107] acquiring lock: {Name:mkc6ed1fa15fd637355ac953d6d06e91f3f34a59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:33:33.441725  344850 cache.go:107] acquiring lock: {Name:mk5c9d9d1f91519c0468e055d96da9be78d8987d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:33:33.441770  344850 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1115 10:33:33.441795  344850 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1115 10:33:33.441831  344850 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1115 10:33:33.441841  344850 cache.go:115] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1115 10:33:33.441872  344850 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 282.746µs
	I1115 10:33:33.441886  344850 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1115 10:33:33.441886  344850 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1115 10:33:33.441891  344850 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1115 10:33:33.441832  344850 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1115 10:33:33.442504  344850 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 10:33:33.443371  344850 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1115 10:33:33.443442  344850 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1115 10:33:33.443495  344850 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1115 10:33:33.443373  344850 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1115 10:33:33.443374  344850 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1115 10:33:33.443765  344850 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 10:33:33.443920  344850 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1115 10:33:33.466229  344850 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:33:33.466249  344850 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:33:33.466264  344850 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:33:33.466291  344850 start.go:360] acquireMachinesLock for no-preload-283677: {Name:mk8d9dc816de84055c03b404ddcac096c332be5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:33:33.466407  344850 start.go:364] duration metric: took 92.843µs to acquireMachinesLock for "no-preload-283677"
	I1115 10:33:33.466435  344850 start.go:93] Provisioning new machine with config: &{Name:no-preload-283677 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-283677 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:33:33.466522  344850 start.go:125] createHost starting for "" (driver="docker")
	I1115 10:33:32.990700  329575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:33.490362  329575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:33.991179  329575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:34.490835  329575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:34.990542  329575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:35.490429  329575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:35.991283  329575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:36.080222  329575 kubeadm.go:1114] duration metric: took 4.248831513s to wait for elevateKubeSystemPrivileges
	I1115 10:33:36.080264  329575 kubeadm.go:403] duration metric: took 16.675737196s to StartCluster
	I1115 10:33:36.080288  329575 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:33:36.080371  329575 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:33:36.081379  329575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:33:36.081645  329575 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 10:33:36.081669  329575 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:33:36.081647  329575 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:33:36.081744  329575 addons.go:70] Setting storage-provisioner=true in profile "flannel-931243"
	I1115 10:33:36.081877  329575 addons.go:70] Setting default-storageclass=true in profile "flannel-931243"
	I1115 10:33:36.081896  329575 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "flannel-931243"
	I1115 10:33:36.081760  329575 addons.go:239] Setting addon storage-provisioner=true in "flannel-931243"
	I1115 10:33:36.081950  329575 config.go:182] Loaded profile config "flannel-931243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:33:36.082006  329575 host.go:66] Checking if "flannel-931243" exists ...
	I1115 10:33:36.082339  329575 cli_runner.go:164] Run: docker container inspect flannel-931243 --format={{.State.Status}}
	I1115 10:33:36.082533  329575 cli_runner.go:164] Run: docker container inspect flannel-931243 --format={{.State.Status}}
	I1115 10:33:36.083482  329575 out.go:179] * Verifying Kubernetes components...
	I1115 10:33:36.084652  329575 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:33:36.112301  329575 addons.go:239] Setting addon default-storageclass=true in "flannel-931243"
	I1115 10:33:36.112361  329575 host.go:66] Checking if "flannel-931243" exists ...
	I1115 10:33:36.112937  329575 cli_runner.go:164] Run: docker container inspect flannel-931243 --format={{.State.Status}}
	I1115 10:33:36.115843  329575 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:33:36.117172  329575 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:33:36.117192  329575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:33:36.117253  329575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-931243
	I1115 10:33:36.143662  329575 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:33:36.143805  329575 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:33:36.143911  329575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-931243
	I1115 10:33:36.145912  329575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/flannel-931243/id_rsa Username:docker}
	I1115 10:33:36.174213  329575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/flannel-931243/id_rsa Username:docker}
	I1115 10:33:36.378581  329575 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 10:33:36.480747  329575 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:33:36.586388  329575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:33:36.599473  329575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:33:37.092998  329575 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1115 10:33:37.093894  329575 node_ready.go:35] waiting up to 15m0s for node "flannel-931243" to be "Ready" ...
	I1115 10:33:37.334015  329575 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1115 10:33:37.335277  329575 addons.go:515] duration metric: took 1.253600558s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1115 10:33:37.596471  329575 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-931243" context rescaled to 1 replicas
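	The stanza above enables the default-storageclass and storage-provisioner addons and, just before that, rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the network gateway. The one-line command logged at 10:33:36.378581 is easier to read unrolled; this is a sketch using the same paths and the gateway IP from this run (192.168.94.1):
	
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o yaml \
	  | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' \
	        -e '/^        errors *$/i \        log' \
	  | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -
	
	In this run the edit completes at 10:33:37.092998 ("host record injected into CoreDNS's ConfigMap") before the node-ready wait begins.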
	I1115 10:33:33.250667  337023 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/old-k8s-version-087235/proxy-client.crt ...
	I1115 10:33:33.250709  337023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/old-k8s-version-087235/proxy-client.crt: {Name:mk3c69966d3bad37f4c826013781c8fa0eca5c60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:33:33.250909  337023 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/old-k8s-version-087235/proxy-client.key ...
	I1115 10:33:33.250926  337023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/old-k8s-version-087235/proxy-client.key: {Name:mk7ecdaaedcc249be9ac9ffff27702a66d0da882 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:33:33.251116  337023 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:33:33.251159  337023 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:33:33.251170  337023 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:33:33.251190  337023 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:33:33.251212  337023 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:33:33.251234  337023 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:33:33.251271  337023 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:33:33.251817  337023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:33:33.272943  337023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:33:33.293052  337023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:33:33.312974  337023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:33:33.336040  337023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/old-k8s-version-087235/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1115 10:33:33.358831  337023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/old-k8s-version-087235/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:33:33.378278  337023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/old-k8s-version-087235/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:33:33.400610  337023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/old-k8s-version-087235/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 10:33:33.421919  337023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:33:33.442107  337023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:33:33.464825  337023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:33:33.488114  337023 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:33:33.503790  337023 ssh_runner.go:195] Run: openssl version
	I1115 10:33:33.514316  337023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:33:33.525673  337023 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:33:33.530772  337023 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:33:33.530831  337023 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:33:33.588278  337023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:33:33.600357  337023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:33:33.610403  337023 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:33:33.614802  337023 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:33:33.614854  337023 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:33:33.660353  337023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:33:33.673236  337023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:33:33.685784  337023 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:33:33.690365  337023 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:33:33.690442  337023 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:33:33.739344  337023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:33:33.754147  337023 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:33:33.760824  337023 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 10:33:33.761032  337023 kubeadm.go:401] StartCluster: {Name:old-k8s-version-087235 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-087235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:33:33.761145  337023 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:33:33.761518  337023 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:33:33.810125  337023 cri.go:89] found id: ""
	I1115 10:33:33.810201  337023 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:33:33.818774  337023 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 10:33:33.826651  337023 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 10:33:33.826714  337023 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 10:33:33.835204  337023 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 10:33:33.835226  337023 kubeadm.go:158] found existing configuration files:
	
	I1115 10:33:33.835326  337023 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 10:33:33.847118  337023 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 10:33:33.847186  337023 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 10:33:33.857146  337023 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 10:33:33.869151  337023 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 10:33:33.869236  337023 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 10:33:33.881010  337023 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 10:33:33.891667  337023 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 10:33:33.891734  337023 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 10:33:33.903357  337023 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 10:33:33.914054  337023 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 10:33:33.914203  337023 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 10:33:33.924413  337023 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 10:33:34.045558  337023 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1115 10:33:34.150473  337023 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 10:33:33.468506  344850 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:33:33.468682  344850 start.go:159] libmachine.API.Create for "no-preload-283677" (driver="docker")
	I1115 10:33:33.468708  344850 client.go:173] LocalClient.Create starting
	I1115 10:33:33.468762  344850 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem
	I1115 10:33:33.468789  344850 main.go:143] libmachine: Decoding PEM data...
	I1115 10:33:33.468807  344850 main.go:143] libmachine: Parsing certificate...
	I1115 10:33:33.468852  344850 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem
	I1115 10:33:33.468870  344850 main.go:143] libmachine: Decoding PEM data...
	I1115 10:33:33.468880  344850 main.go:143] libmachine: Parsing certificate...
	I1115 10:33:33.469230  344850 cli_runner.go:164] Run: docker network inspect no-preload-283677 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:33:33.490328  344850 cli_runner.go:211] docker network inspect no-preload-283677 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:33:33.490412  344850 network_create.go:284] running [docker network inspect no-preload-283677] to gather additional debugging logs...
	I1115 10:33:33.490441  344850 cli_runner.go:164] Run: docker network inspect no-preload-283677
	W1115 10:33:33.513134  344850 cli_runner.go:211] docker network inspect no-preload-283677 returned with exit code 1
	I1115 10:33:33.513166  344850 network_create.go:287] error running [docker network inspect no-preload-283677]: docker network inspect no-preload-283677: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-283677 not found
	I1115 10:33:33.513180  344850 network_create.go:289] output of [docker network inspect no-preload-283677]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-283677 not found
	
	** /stderr **
	I1115 10:33:33.513275  344850 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:33:33.538315  344850 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-77644897380e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:f9:e2:83:c3:91} reservation:<nil>}
	I1115 10:33:33.538947  344850 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e808d03b2052 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:03:24:ef:92:a1} reservation:<nil>}
	I1115 10:33:33.539657  344850 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3fb45eeaa66e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1a:b3:b7:0f:2e:99} reservation:<nil>}
	I1115 10:33:33.540587  344850 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d1b510}
	I1115 10:33:33.540646  344850 network_create.go:124] attempt to create docker network no-preload-283677 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1115 10:33:33.540704  344850 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-283677 no-preload-283677
	I1115 10:33:33.604406  344850 cache.go:162] opening:  /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1115 10:33:33.609124  344850 cache.go:162] opening:  /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1115 10:33:33.609469  344850 cache.go:162] opening:  /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1115 10:33:33.610835  344850 network_create.go:108] docker network no-preload-283677 192.168.76.0/24 created
	I1115 10:33:33.610863  344850 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-283677" container
	I1115 10:33:33.610930  344850 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:33:33.615413  344850 cache.go:162] opening:  /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1115 10:33:33.615885  344850 cache.go:162] opening:  /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1115 10:33:33.631208  344850 cli_runner.go:164] Run: docker volume create no-preload-283677 --label name.minikube.sigs.k8s.io=no-preload-283677 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:33:33.633464  344850 cache.go:162] opening:  /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1115 10:33:33.646271  344850 cache.go:162] opening:  /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1115 10:33:33.656365  344850 oci.go:103] Successfully created a docker volume no-preload-283677
	I1115 10:33:33.656463  344850 cli_runner.go:164] Run: docker run --rm --name no-preload-283677-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-283677 --entrypoint /usr/bin/test -v no-preload-283677:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:33:33.709920  344850 cache.go:157] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1115 10:33:33.709950  344850 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 268.279072ms
	I1115 10:33:33.709985  344850 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1115 10:33:33.954105  344850 cache.go:157] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1115 10:33:33.954138  344850 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 512.547972ms
	I1115 10:33:33.954157  344850 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1115 10:33:34.148943  344850 oci.go:107] Successfully prepared a docker volume no-preload-283677
	I1115 10:33:34.149005  344850 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1115 10:33:34.149138  344850 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 10:33:34.149250  344850 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 10:33:34.228977  344850 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-283677 --name no-preload-283677 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-283677 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-283677 --network no-preload-283677 --ip 192.168.76.2 --volume no-preload-283677:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 10:33:34.592252  344850 cli_runner.go:164] Run: docker container inspect no-preload-283677 --format={{.State.Running}}
	I1115 10:33:34.617276  344850 cli_runner.go:164] Run: docker container inspect no-preload-283677 --format={{.State.Status}}
	I1115 10:33:34.637810  344850 cli_runner.go:164] Run: docker exec no-preload-283677 stat /var/lib/dpkg/alternatives/iptables
	I1115 10:33:34.712397  344850 oci.go:144] the created container "no-preload-283677" has a running status.
	I1115 10:33:34.712448  344850 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa...
	I1115 10:33:34.832311  344850 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 10:33:34.870551  344850 cli_runner.go:164] Run: docker container inspect no-preload-283677 --format={{.State.Status}}
	I1115 10:33:34.910401  344850 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 10:33:34.910577  344850 kic_runner.go:114] Args: [docker exec --privileged no-preload-283677 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 10:33:34.924765  344850 cache.go:157] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1115 10:33:34.924805  344850 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.483127605s
	I1115 10:33:34.924825  344850 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1115 10:33:34.977933  344850 cli_runner.go:164] Run: docker container inspect no-preload-283677 --format={{.State.Status}}
	I1115 10:33:35.006878  344850 machine.go:94] provisionDockerMachine start ...
	I1115 10:33:35.007011  344850 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:33:35.024801  344850 cache.go:157] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1115 10:33:35.024839  344850 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.583174892s
	I1115 10:33:35.024859  344850 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1115 10:33:35.038154  344850 main.go:143] libmachine: Using SSH client type: native
	I1115 10:33:35.038545  344850 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33094 <nil> <nil>}
	I1115 10:33:35.038565  344850 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:33:35.039248  344850 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58546->127.0.0.1:33094: read: connection reset by peer
	I1115 10:33:35.088056  344850 cache.go:157] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1115 10:33:35.088100  344850 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.646502941s
	I1115 10:33:35.088116  344850 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1115 10:33:35.096867  344850 cache.go:157] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1115 10:33:35.096899  344850 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.655240926s
	I1115 10:33:35.096916  344850 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1115 10:33:35.579856  344850 cache.go:157] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1115 10:33:35.579898  344850 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 2.138229072s
	I1115 10:33:35.579914  344850 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1115 10:33:35.579936  344850 cache.go:87] Successfully saved all images to host disk.
	I1115 10:33:38.182822  344850 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-283677
	
	I1115 10:33:38.182854  344850 ubuntu.go:182] provisioning hostname "no-preload-283677"
	I1115 10:33:38.182977  344850 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:33:38.209380  344850 main.go:143] libmachine: Using SSH client type: native
	I1115 10:33:38.209701  344850 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33094 <nil> <nil>}
	I1115 10:33:38.209718  344850 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-283677 && echo "no-preload-283677" | sudo tee /etc/hostname
	I1115 10:33:35.156407  334883 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.071900241s
	I1115 10:33:37.246758  334883 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.16253064s
	I1115 10:33:38.587265  334883 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.502782165s
	I1115 10:33:38.601063  334883 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 10:33:38.613775  334883 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 10:33:38.625069  334883 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 10:33:38.625371  334883 kubeadm.go:319] [mark-control-plane] Marking the node bridge-931243 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 10:33:38.634835  334883 kubeadm.go:319] [bootstrap-token] Using token: h71tvd.ftpgwidrt9zxr3ez
	I1115 10:33:38.636013  334883 out.go:252]   - Configuring RBAC rules ...
	I1115 10:33:38.636160  334883 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 10:33:38.640862  334883 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 10:33:38.648226  334883 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 10:33:38.650843  334883 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 10:33:38.653324  334883 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 10:33:38.655867  334883 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 10:33:38.994303  334883 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 10:33:39.414141  334883 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 10:33:39.994611  334883 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 10:33:39.995545  334883 kubeadm.go:319] 
	I1115 10:33:39.995640  334883 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 10:33:39.995655  334883 kubeadm.go:319] 
	I1115 10:33:39.995751  334883 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 10:33:39.995762  334883 kubeadm.go:319] 
	I1115 10:33:39.995785  334883 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 10:33:39.995875  334883 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 10:33:39.995974  334883 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 10:33:39.995988  334883 kubeadm.go:319] 
	I1115 10:33:39.996073  334883 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 10:33:39.996092  334883 kubeadm.go:319] 
	I1115 10:33:39.996151  334883 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 10:33:39.996162  334883 kubeadm.go:319] 
	I1115 10:33:39.996234  334883 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 10:33:39.996315  334883 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 10:33:39.996421  334883 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 10:33:39.996432  334883 kubeadm.go:319] 
	I1115 10:33:39.996568  334883 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 10:33:39.996685  334883 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 10:33:39.996695  334883 kubeadm.go:319] 
	I1115 10:33:39.996804  334883 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token h71tvd.ftpgwidrt9zxr3ez \
	I1115 10:33:39.996905  334883 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c958101d3a67868123c5e439a6c1ea6e30a99a6538b76cbe461e2c919cf1aec \
	I1115 10:33:39.996981  334883 kubeadm.go:319] 	--control-plane 
	I1115 10:33:39.996992  334883 kubeadm.go:319] 
	I1115 10:33:39.997097  334883 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 10:33:39.997113  334883 kubeadm.go:319] 
	I1115 10:33:39.997214  334883 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token h71tvd.ftpgwidrt9zxr3ez \
	I1115 10:33:39.997337  334883 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c958101d3a67868123c5e439a6c1ea6e30a99a6538b76cbe461e2c919cf1aec 
	I1115 10:33:39.999768  334883 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1115 10:33:40.000072  334883 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1115 10:33:40.000210  334883 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 10:33:40.000239  334883 cni.go:84] Creating CNI manager for "bridge"
	I1115 10:33:40.002022  334883 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1115 10:33:38.364405  344850 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-283677
	
	I1115 10:33:38.364509  344850 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:33:38.390330  344850 main.go:143] libmachine: Using SSH client type: native
	I1115 10:33:38.390612  344850 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33094 <nil> <nil>}
	I1115 10:33:38.390631  344850 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-283677' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-283677/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-283677' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:33:38.533338  344850 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:33:38.533380  344850 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-55448/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-55448/.minikube}
	I1115 10:33:38.533408  344850 ubuntu.go:190] setting up certificates
	I1115 10:33:38.533422  344850 provision.go:84] configureAuth start
	I1115 10:33:38.533489  344850 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-283677
	I1115 10:33:38.555524  344850 provision.go:143] copyHostCerts
	I1115 10:33:38.555634  344850 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem, removing ...
	I1115 10:33:38.555658  344850 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem
	I1115 10:33:38.555748  344850 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem (1082 bytes)
	I1115 10:33:38.555883  344850 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem, removing ...
	I1115 10:33:38.555910  344850 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem
	I1115 10:33:38.555994  344850 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem (1123 bytes)
	I1115 10:33:38.556091  344850 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem, removing ...
	I1115 10:33:38.556102  344850 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem
	I1115 10:33:38.556146  344850 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem (1679 bytes)
	I1115 10:33:38.556225  344850 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem org=jenkins.no-preload-283677 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-283677]
	I1115 10:33:38.733983  344850 provision.go:177] copyRemoteCerts
	I1115 10:33:38.734068  344850 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:33:38.734131  344850 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:33:38.760843  344850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
	I1115 10:33:38.860321  344850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:33:38.880106  344850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 10:33:38.898560  344850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:33:38.916822  344850 provision.go:87] duration metric: took 383.386042ms to configureAuth
	I1115 10:33:38.916850  344850 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:33:38.917063  344850 config.go:182] Loaded profile config "no-preload-283677": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:33:38.917172  344850 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:33:38.936471  344850 main.go:143] libmachine: Using SSH client type: native
	I1115 10:33:38.936676  344850 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33094 <nil> <nil>}
	I1115 10:33:38.936691  344850 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:33:39.212763  344850 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:33:39.212790  344850 machine.go:97] duration metric: took 4.205878651s to provisionDockerMachine
	I1115 10:33:39.212803  344850 client.go:176] duration metric: took 5.744087779s to LocalClient.Create
	I1115 10:33:39.212828  344850 start.go:167] duration metric: took 5.744145481s to libmachine.API.Create "no-preload-283677"
	I1115 10:33:39.212837  344850 start.go:293] postStartSetup for "no-preload-283677" (driver="docker")
	I1115 10:33:39.212848  344850 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:33:39.212915  344850 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:33:39.212995  344850 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:33:39.240697  344850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
	I1115 10:33:39.359468  344850 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:33:39.365595  344850 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:33:39.365693  344850 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:33:39.365723  344850 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/addons for local assets ...
	I1115 10:33:39.365811  344850 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/files for local assets ...
	I1115 10:33:39.365943  344850 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem -> 589622.pem in /etc/ssl/certs
	I1115 10:33:39.366098  344850 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:33:39.378260  344850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:33:39.406638  344850 start.go:296] duration metric: took 193.785475ms for postStartSetup
	I1115 10:33:39.407116  344850 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-283677
	I1115 10:33:39.427888  344850 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/config.json ...
	I1115 10:33:39.428153  344850 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:33:39.428197  344850 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:33:39.451214  344850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
	I1115 10:33:39.548453  344850 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:33:39.553900  344850 start.go:128] duration metric: took 6.087361207s to createHost
	I1115 10:33:39.553928  344850 start.go:83] releasing machines lock for "no-preload-283677", held for 6.087507501s
	I1115 10:33:39.554034  344850 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-283677
	I1115 10:33:39.577282  344850 ssh_runner.go:195] Run: cat /version.json
	I1115 10:33:39.577354  344850 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:33:39.577384  344850 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:33:39.577456  344850 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:33:39.602583  344850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
	I1115 10:33:39.604342  344850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
	I1115 10:33:39.750916  344850 ssh_runner.go:195] Run: systemctl --version
	I1115 10:33:39.758775  344850 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:33:39.796259  344850 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:33:39.801099  344850 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:33:39.801159  344850 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:33:39.827111  344850 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1115 10:33:39.827133  344850 start.go:496] detecting cgroup driver to use...
	I1115 10:33:39.827169  344850 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:33:39.827219  344850 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:33:39.846730  344850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:33:39.859496  344850 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:33:39.859567  344850 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:33:39.877140  344850 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:33:39.898020  344850 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:33:39.991676  344850 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:33:40.123740  344850 docker.go:234] disabling docker service ...
	I1115 10:33:40.123806  344850 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:33:40.160083  344850 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:33:40.185875  344850 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:33:40.298820  344850 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:33:40.406317  344850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:33:40.422373  344850 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:33:40.445471  344850 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:33:40.445654  344850 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:33:40.460868  344850 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:33:40.460934  344850 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:33:40.472298  344850 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:33:40.483278  344850 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:33:40.494277  344850 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:33:40.503202  344850 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:33:40.512830  344850 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:33:40.527135  344850 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:33:40.537093  344850 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:33:40.545684  344850 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:33:40.554860  344850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:33:40.679780  344850 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:33:40.887634  344850 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:33:40.887776  344850 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:33:40.894018  344850 start.go:564] Will wait 60s for crictl version
	I1115 10:33:40.894088  344850 ssh_runner.go:195] Run: which crictl
	I1115 10:33:40.900768  344850 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:33:40.934001  344850 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:33:40.934109  344850 ssh_runner.go:195] Run: crio --version
	I1115 10:33:40.967249  344850 ssh_runner.go:195] Run: crio --version
	I1115 10:33:41.009864  344850 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
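	Before that "Preparing Kubernetes" line, the runtime setup above amounts to pointing crictl at the CRI-O socket and patching CRI-O's drop-in config for the pause image and cgroup driver, then restarting the service. Condensed from the commands logged between 10:33:40.42 and 10:33:40.68, a sketch of the same edits:
	
	# crictl endpoint, as written to /etc/crictl.yaml in the run above
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	
	# pause image and cgroup settings patched into CRI-O's drop-in config
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	
	# reload units and restart CRI-O so the new settings take effect
	sudo systemctl daemon-reload
	sudo systemctl restart crio
	
	In this run the restart is followed by a 60s wait for /var/run/crio/crio.sock and a crictl version check before the Kubernetes preparation begins.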
	W1115 10:33:39.097599  329575 node_ready.go:57] node "flannel-931243" has "Ready":"False" status (will retry)
	W1115 10:33:41.099834  329575 node_ready.go:57] node "flannel-931243" has "Ready":"False" status (will retry)
	I1115 10:33:41.011152  344850 cli_runner.go:164] Run: docker network inspect no-preload-283677 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:33:41.031807  344850 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1115 10:33:41.037114  344850 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:33:41.048796  344850 kubeadm.go:884] updating cluster {Name:no-preload-283677 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-283677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:33:41.048910  344850 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:33:41.049000  344850 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:33:41.082693  344850 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1115 10:33:41.082726  344850 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1115 10:33:41.082784  344850 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:33:41.082800  344850 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1115 10:33:41.083012  344850 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 10:33:41.083044  344850 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1115 10:33:41.083044  344850 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1115 10:33:41.083191  344850 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1115 10:33:41.083217  344850 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1115 10:33:41.083266  344850 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1115 10:33:41.084497  344850 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 10:33:41.084518  344850 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1115 10:33:41.084904  344850 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1115 10:33:41.085013  344850 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1115 10:33:41.085098  344850 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1115 10:33:41.085192  344850 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1115 10:33:41.085390  344850 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:33:41.086043  344850 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1115 10:33:41.208233  344850 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1115 10:33:41.216559  344850 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1115 10:33:41.217248  344850 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1115 10:33:41.217590  344850 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 10:33:41.223482  344850 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1115 10:33:41.224404  344850 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1115 10:33:41.262637  344850 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
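	(Each required image is then probed individually with podman image inspect, as in the runs above; a non-zero exit or an ID that does not match the expected digest is what produces the "needs transfer" decisions that follow. A sketch of one such probe:

	    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/pause:3.10.1 \
	      || echo "not present -> transfer from cache"
	)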
	I1115 10:33:41.330050  344850 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1115 10:33:41.330114  344850 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1115 10:33:41.330164  344850 ssh_runner.go:195] Run: which crictl
	I1115 10:33:41.338624  344850 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1115 10:33:41.338677  344850 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1115 10:33:41.338729  344850 ssh_runner.go:195] Run: which crictl
	I1115 10:33:41.338836  344850 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1115 10:33:41.338878  344850 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1115 10:33:41.338915  344850 ssh_runner.go:195] Run: which crictl
	I1115 10:33:41.344591  344850 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1115 10:33:41.344634  344850 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 10:33:41.344678  344850 ssh_runner.go:195] Run: which crictl
	I1115 10:33:41.350241  344850 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1115 10:33:41.350283  344850 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1115 10:33:41.350322  344850 ssh_runner.go:195] Run: which crictl
	I1115 10:33:41.350382  344850 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1115 10:33:41.350463  344850 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1115 10:33:41.350503  344850 ssh_runner.go:195] Run: which crictl
	I1115 10:33:41.421235  344850 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1115 10:33:41.421294  344850 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1115 10:33:41.421346  344850 ssh_runner.go:195] Run: which crictl
	I1115 10:33:41.421355  344850 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1115 10:33:41.421391  344850 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1115 10:33:41.421454  344850 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 10:33:41.421459  344850 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1115 10:33:41.421500  344850 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1115 10:33:41.421524  344850 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1115 10:33:41.529812  344850 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1115 10:33:41.529925  344850 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1115 10:33:41.530001  344850 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1115 10:33:41.530019  344850 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 10:33:41.530050  344850 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1115 10:33:41.530053  344850 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1115 10:33:41.530077  344850 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1115 10:33:41.640535  344850 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1115 10:33:41.640657  344850 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 10:33:41.640743  344850 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1115 10:33:41.640838  344850 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1115 10:33:41.653784  344850 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1115 10:33:41.653927  344850 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1115 10:33:41.654061  344850 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1115 10:33:41.752504  344850 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1115 10:33:41.759547  344850 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1115 10:33:41.759594  344850 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1115 10:33:41.759656  344850 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1115 10:33:41.759680  344850 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1115 10:33:41.759547  344850 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1115 10:33:41.759787  344850 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1115 10:33:41.821912  344850 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1115 10:33:41.821992  344850 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1115 10:33:41.821912  344850 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1115 10:33:41.822041  344850 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1115 10:33:41.822087  344850 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1115 10:33:41.822094  344850 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1115 10:33:41.851218  344850 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1115 10:33:41.851262  344850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1115 10:33:41.851334  344850 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1115 10:33:41.851353  344850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1115 10:33:41.851405  344850 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1115 10:33:41.851421  344850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1115 10:33:41.851473  344850 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1115 10:33:41.851489  344850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1115 10:33:41.851519  344850 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1115 10:33:41.851535  344850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1115 10:33:41.851596  344850 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1115 10:33:41.851614  344850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1115 10:33:41.852219  344850 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1115 10:33:41.852313  344850 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1115 10:33:41.880737  344850 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1115 10:33:41.880774  344850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
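	(Each cached tarball is copied only when it is not already on the node: the existence check is the stat call shown for every image, and on failure the tarball is streamed from the host cache under .minikube/cache/images/amd64/... to /var/lib/minikube/images/ over the existing SSH session; the "scp" in the log is ssh_runner's internal copy, not the scp binary. The check itself, for one image:

	    stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1 \
	      || echo "missing -> copy kube-proxy_v1.34.1 (25966080 bytes) from the host cache"
	)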
	I1115 10:33:41.987343  344850 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1115 10:33:41.987423  344850 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1115 10:33:42.897170  344850 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1115 10:33:42.897220  344850 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1115 10:33:42.897274  344850 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
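	(Loading a transferred tarball into the runtime is done with podman load, as above. In the minikube node image CRI-O and podman are expected to share the same containers/storage backend, so an image loaded this way should immediately show up in crictl as well; a quick manual check:

	    sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	    sudo crictl images | grep pause
	)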
	I1115 10:33:40.003324  334883 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1115 10:33:40.012502  334883 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1115 10:33:40.027345  334883 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:33:40.027534  334883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:40.027671  334883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-931243 minikube.k8s.io/updated_at=2025_11_15T10_33_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510 minikube.k8s.io/name=bridge-931243 minikube.k8s.io/primary=true
	I1115 10:33:40.157564  334883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:40.157737  334883 ops.go:34] apiserver oom_adj: -16
	I1115 10:33:40.658588  334883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:41.158104  334883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:41.657653  334883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:42.158412  334883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:42.658607  334883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:43.158630  334883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:43.658074  334883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:44.158170  334883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:44.232206  334883 kubeadm.go:1114] duration metric: took 4.204819547s to wait for elevateKubeSystemPrivileges
	I1115 10:33:44.232248  334883 kubeadm.go:403] duration metric: took 16.988437006s to StartCluster
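	(The repeated "kubectl get sa default" runs above are the elevateKubeSystemPrivileges wait named in the log: after creating the minikube-rbac clusterrolebinding at 10:33:40.027534, minikube polls roughly every 500ms until the default service account exists. A shell equivalent of that wait loop:

	    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done
	)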
	I1115 10:33:44.232274  334883 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:33:44.232356  334883 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:33:44.233502  334883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:33:44.233769  334883 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 10:33:44.233778  334883 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:33:44.233863  334883 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:33:44.233987  334883 addons.go:70] Setting storage-provisioner=true in profile "bridge-931243"
	I1115 10:33:44.234013  334883 addons.go:239] Setting addon storage-provisioner=true in "bridge-931243"
	I1115 10:33:44.234014  334883 config.go:182] Loaded profile config "bridge-931243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:33:44.234047  334883 host.go:66] Checking if "bridge-931243" exists ...
	I1115 10:33:44.234049  334883 addons.go:70] Setting default-storageclass=true in profile "bridge-931243"
	I1115 10:33:44.234090  334883 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "bridge-931243"
	I1115 10:33:44.234466  334883 cli_runner.go:164] Run: docker container inspect bridge-931243 --format={{.State.Status}}
	I1115 10:33:44.234634  334883 cli_runner.go:164] Run: docker container inspect bridge-931243 --format={{.State.Status}}
	I1115 10:33:44.239191  334883 out.go:179] * Verifying Kubernetes components...
	I1115 10:33:44.240801  334883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:33:44.259535  334883 addons.go:239] Setting addon default-storageclass=true in "bridge-931243"
	I1115 10:33:44.259605  334883 host.go:66] Checking if "bridge-931243" exists ...
	I1115 10:33:44.260123  334883 cli_runner.go:164] Run: docker container inspect bridge-931243 --format={{.State.Status}}
	I1115 10:33:44.260162  334883 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:33:45.040823  337023 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1115 10:33:45.041040  337023 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 10:33:45.041182  337023 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 10:33:45.041260  337023 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1115 10:33:45.041330  337023 kubeadm.go:319] OS: Linux
	I1115 10:33:45.041394  337023 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 10:33:45.041443  337023 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 10:33:45.041483  337023 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 10:33:45.041523  337023 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 10:33:45.041562  337023 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 10:33:45.041606  337023 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 10:33:45.041643  337023 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 10:33:45.041682  337023 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 10:33:45.041719  337023 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 10:33:45.041777  337023 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 10:33:45.041853  337023 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 10:33:45.041993  337023 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1115 10:33:45.042091  337023 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 10:33:45.045112  337023 out.go:252]   - Generating certificates and keys ...
	I1115 10:33:45.045250  337023 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 10:33:45.045334  337023 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 10:33:45.045457  337023 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 10:33:45.045558  337023 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 10:33:45.045653  337023 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 10:33:45.045712  337023 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 10:33:45.045776  337023 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 10:33:45.045932  337023 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-087235] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1115 10:33:45.046028  337023 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 10:33:45.046222  337023 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-087235] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1115 10:33:45.046338  337023 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 10:33:45.046461  337023 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 10:33:45.046526  337023 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 10:33:45.046597  337023 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 10:33:45.046663  337023 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 10:33:45.046732  337023 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 10:33:45.046813  337023 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 10:33:45.046883  337023 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 10:33:45.046989  337023 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 10:33:45.047072  337023 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 10:33:45.048555  337023 out.go:252]   - Booting up control plane ...
	I1115 10:33:45.048665  337023 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 10:33:45.048762  337023 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 10:33:45.048845  337023 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 10:33:45.048982  337023 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 10:33:45.049093  337023 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 10:33:45.049146  337023 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 10:33:45.049332  337023 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1115 10:33:45.049439  337023 kubeadm.go:319] [apiclient] All control plane components are healthy after 5.502917 seconds
	I1115 10:33:45.049579  337023 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 10:33:45.049731  337023 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 10:33:45.049803  337023 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 10:33:45.050041  337023 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-087235 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 10:33:45.050119  337023 kubeadm.go:319] [bootstrap-token] Using token: xz92yx.t9ply6v769ro6lrh
	I1115 10:33:45.053489  337023 out.go:252]   - Configuring RBAC rules ...
	I1115 10:33:45.053621  337023 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 10:33:45.053772  337023 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 10:33:45.053991  337023 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 10:33:45.054175  337023 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 10:33:45.054330  337023 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 10:33:45.054463  337023 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 10:33:45.054626  337023 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 10:33:45.054691  337023 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 10:33:45.054760  337023 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 10:33:45.054770  337023 kubeadm.go:319] 
	I1115 10:33:45.054842  337023 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 10:33:45.054854  337023 kubeadm.go:319] 
	I1115 10:33:45.054940  337023 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 10:33:45.054968  337023 kubeadm.go:319] 
	I1115 10:33:45.055001  337023 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 10:33:45.055092  337023 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 10:33:45.055160  337023 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 10:33:45.055171  337023 kubeadm.go:319] 
	I1115 10:33:45.055237  337023 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 10:33:45.055248  337023 kubeadm.go:319] 
	I1115 10:33:45.055305  337023 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 10:33:45.055316  337023 kubeadm.go:319] 
	I1115 10:33:45.055379  337023 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 10:33:45.055473  337023 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 10:33:45.055559  337023 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 10:33:45.055572  337023 kubeadm.go:319] 
	I1115 10:33:45.055678  337023 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 10:33:45.055779  337023 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 10:33:45.055790  337023 kubeadm.go:319] 
	I1115 10:33:45.055887  337023 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token xz92yx.t9ply6v769ro6lrh \
	I1115 10:33:45.056037  337023 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c958101d3a67868123c5e439a6c1ea6e30a99a6538b76cbe461e2c919cf1aec \
	I1115 10:33:45.056096  337023 kubeadm.go:319] 	--control-plane 
	I1115 10:33:45.056114  337023 kubeadm.go:319] 
	I1115 10:33:45.056259  337023 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 10:33:45.056276  337023 kubeadm.go:319] 
	I1115 10:33:45.056423  337023 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token xz92yx.t9ply6v769ro6lrh \
	I1115 10:33:45.056575  337023 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c958101d3a67868123c5e439a6c1ea6e30a99a6538b76cbe461e2c919cf1aec 
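	(The join commands printed by kubeadm embed a bootstrap token and the CA public-key hash. If needed, that hash can be recomputed on the control plane and compared against the value above, using the pipeline documented by kubeadm; the path assumes the default /etc/kubernetes/pki layout:

	    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
	)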
	I1115 10:33:45.056597  337023 cni.go:84] Creating CNI manager for ""
	I1115 10:33:45.056606  337023 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:33:45.058159  337023 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1115 10:33:44.261416  334883 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:33:44.261438  334883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:33:44.261498  334883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-931243
	I1115 10:33:44.295020  334883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/bridge-931243/id_rsa Username:docker}
	I1115 10:33:44.295600  334883 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:33:44.295622  334883 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:33:44.295669  334883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-931243
	I1115 10:33:44.315434  334883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/bridge-931243/id_rsa Username:docker}
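	(The docker container inspect calls above resolve the host port that Docker mapped to the node's sshd (22/tcp -> 33084 here), and sshutil then connects as user docker with the per-machine key. The same connection can be opened by hand:

	    PORT=$(docker container inspect -f \
	      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' bridge-931243)
	    ssh -i /home/jenkins/minikube-integration/21894-55448/.minikube/machines/bridge-931243/id_rsa \
	        -p "$PORT" docker@127.0.0.1
	)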
	I1115 10:33:44.372399  334883 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 10:33:44.489684  334883 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:33:44.542627  334883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:33:44.557754  334883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:33:44.893058  334883 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
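	(The long sed pipeline at 10:33:44.372399 edits the coredns ConfigMap in place: it inserts a hosts block before the "forward . /etc/resolv.conf" line so host.minikube.internal resolves to the host gateway, and adds "log" after "errors". The resulting Corefile fragment should look roughly like:

	    hosts {
	       192.168.85.1 host.minikube.internal
	       fallthrough
	    }
	)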
	I1115 10:33:44.895139  334883 node_ready.go:35] waiting up to 15m0s for node "bridge-931243" to be "Ready" ...
	I1115 10:33:44.951096  334883 node_ready.go:49] node "bridge-931243" is "Ready"
	I1115 10:33:44.951129  334883 node_ready.go:38] duration metric: took 55.95419ms for node "bridge-931243" to be "Ready" ...
	I1115 10:33:44.951146  334883 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:33:44.951209  334883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:33:45.389514  334883 api_server.go:72] duration metric: took 1.155698994s to wait for apiserver process to appear ...
	I1115 10:33:45.389548  334883 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:33:45.389572  334883 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1115 10:33:45.447279  334883 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1115 10:33:45.449153  334883 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-931243" context rescaled to 1 replicas
	I1115 10:33:45.449311  334883 api_server.go:141] control plane version: v1.34.1
	I1115 10:33:45.449335  334883 api_server.go:131] duration metric: took 59.779074ms to wait for apiserver health ...
	I1115 10:33:45.449346  334883 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:33:45.453645  334883 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
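	(API server readiness is probed directly against /healthz on the node IP, as in the check at 10:33:45.389572. A manual equivalent from the host; -k skips TLS verification, which minikube's own client avoids by trusting the cluster CA:

	    curl -k https://192.168.85.2:8443/healthz
	    # expected output: ok
	)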
	W1115 10:33:43.597083  329575 node_ready.go:57] node "flannel-931243" has "Ready":"False" status (will retry)
	I1115 10:33:44.096293  329575 node_ready.go:49] node "flannel-931243" is "Ready"
	I1115 10:33:44.096323  329575 node_ready.go:38] duration metric: took 7.002368687s for node "flannel-931243" to be "Ready" ...
	I1115 10:33:44.096342  329575 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:33:44.096399  329575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:33:44.109997  329575 api_server.go:72] duration metric: took 8.028238805s to wait for apiserver process to appear ...
	I1115 10:33:44.110026  329575 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:33:44.110062  329575 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1115 10:33:44.115131  329575 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1115 10:33:44.116012  329575 api_server.go:141] control plane version: v1.34.1
	I1115 10:33:44.116038  329575 api_server.go:131] duration metric: took 6.004766ms to wait for apiserver health ...
	I1115 10:33:44.116047  329575 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:33:44.119667  329575 system_pods.go:59] 7 kube-system pods found
	I1115 10:33:44.119702  329575 system_pods.go:61] "coredns-66bc5c9577-6pm45" [7eb8a360-8473-4fcb-89a2-67de63a35600] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:33:44.119714  329575 system_pods.go:61] "etcd-flannel-931243" [a9f9a132-fc83-4383-821b-96e5947b6895] Running
	I1115 10:33:44.119722  329575 system_pods.go:61] "kube-apiserver-flannel-931243" [45948078-6636-43a8-956a-139f2b1e59aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:33:44.119726  329575 system_pods.go:61] "kube-controller-manager-flannel-931243" [2af99053-5d83-40bf-8a77-b51ecc9e635c] Running
	I1115 10:33:44.119730  329575 system_pods.go:61] "kube-proxy-4mgxz" [248f13bc-f380-4d20-8c0f-e29b7f2f5c55] Running
	I1115 10:33:44.119734  329575 system_pods.go:61] "kube-scheduler-flannel-931243" [60f24337-f796-45e4-a994-06acc2acf132] Running
	I1115 10:33:44.119738  329575 system_pods.go:61] "storage-provisioner" [0648193c-e75a-419f-a635-5b237fae43c0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:33:44.119747  329575 system_pods.go:74] duration metric: took 3.694407ms to wait for pod list to return data ...
	I1115 10:33:44.119755  329575 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:33:44.122299  329575 default_sa.go:45] found service account: "default"
	I1115 10:33:44.122321  329575 default_sa.go:55] duration metric: took 2.558425ms for default service account to be created ...
	I1115 10:33:44.122330  329575 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:33:44.125426  329575 system_pods.go:86] 7 kube-system pods found
	I1115 10:33:44.125527  329575 system_pods.go:89] "coredns-66bc5c9577-6pm45" [7eb8a360-8473-4fcb-89a2-67de63a35600] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:33:44.125538  329575 system_pods.go:89] "etcd-flannel-931243" [a9f9a132-fc83-4383-821b-96e5947b6895] Running
	I1115 10:33:44.125551  329575 system_pods.go:89] "kube-apiserver-flannel-931243" [45948078-6636-43a8-956a-139f2b1e59aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:33:44.125557  329575 system_pods.go:89] "kube-controller-manager-flannel-931243" [2af99053-5d83-40bf-8a77-b51ecc9e635c] Running
	I1115 10:33:44.125563  329575 system_pods.go:89] "kube-proxy-4mgxz" [248f13bc-f380-4d20-8c0f-e29b7f2f5c55] Running
	I1115 10:33:44.125569  329575 system_pods.go:89] "kube-scheduler-flannel-931243" [60f24337-f796-45e4-a994-06acc2acf132] Running
	I1115 10:33:44.125584  329575 system_pods.go:89] "storage-provisioner" [0648193c-e75a-419f-a635-5b237fae43c0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:33:44.125618  329575 retry.go:31] will retry after 271.809522ms: missing components: kube-dns
	I1115 10:33:44.401909  329575 system_pods.go:86] 7 kube-system pods found
	I1115 10:33:44.401945  329575 system_pods.go:89] "coredns-66bc5c9577-6pm45" [7eb8a360-8473-4fcb-89a2-67de63a35600] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:33:44.401977  329575 system_pods.go:89] "etcd-flannel-931243" [a9f9a132-fc83-4383-821b-96e5947b6895] Running
	I1115 10:33:44.401990  329575 system_pods.go:89] "kube-apiserver-flannel-931243" [45948078-6636-43a8-956a-139f2b1e59aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:33:44.401995  329575 system_pods.go:89] "kube-controller-manager-flannel-931243" [2af99053-5d83-40bf-8a77-b51ecc9e635c] Running
	I1115 10:33:44.402001  329575 system_pods.go:89] "kube-proxy-4mgxz" [248f13bc-f380-4d20-8c0f-e29b7f2f5c55] Running
	I1115 10:33:44.402006  329575 system_pods.go:89] "kube-scheduler-flannel-931243" [60f24337-f796-45e4-a994-06acc2acf132] Running
	I1115 10:33:44.402013  329575 system_pods.go:89] "storage-provisioner" [0648193c-e75a-419f-a635-5b237fae43c0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:33:44.402032  329575 retry.go:31] will retry after 248.630545ms: missing components: kube-dns
	I1115 10:33:44.660701  329575 system_pods.go:86] 7 kube-system pods found
	I1115 10:33:44.660744  329575 system_pods.go:89] "coredns-66bc5c9577-6pm45" [7eb8a360-8473-4fcb-89a2-67de63a35600] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:33:44.660751  329575 system_pods.go:89] "etcd-flannel-931243" [a9f9a132-fc83-4383-821b-96e5947b6895] Running
	I1115 10:33:44.660760  329575 system_pods.go:89] "kube-apiserver-flannel-931243" [45948078-6636-43a8-956a-139f2b1e59aa] Running
	I1115 10:33:44.660773  329575 system_pods.go:89] "kube-controller-manager-flannel-931243" [2af99053-5d83-40bf-8a77-b51ecc9e635c] Running
	I1115 10:33:44.660778  329575 system_pods.go:89] "kube-proxy-4mgxz" [248f13bc-f380-4d20-8c0f-e29b7f2f5c55] Running
	I1115 10:33:44.660783  329575 system_pods.go:89] "kube-scheduler-flannel-931243" [60f24337-f796-45e4-a994-06acc2acf132] Running
	I1115 10:33:44.660792  329575 system_pods.go:89] "storage-provisioner" [0648193c-e75a-419f-a635-5b237fae43c0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:33:44.660810  329575 retry.go:31] will retry after 307.420479ms: missing components: kube-dns
	I1115 10:33:44.975136  329575 system_pods.go:86] 7 kube-system pods found
	I1115 10:33:44.975179  329575 system_pods.go:89] "coredns-66bc5c9577-6pm45" [7eb8a360-8473-4fcb-89a2-67de63a35600] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:33:44.975188  329575 system_pods.go:89] "etcd-flannel-931243" [a9f9a132-fc83-4383-821b-96e5947b6895] Running
	I1115 10:33:44.975198  329575 system_pods.go:89] "kube-apiserver-flannel-931243" [45948078-6636-43a8-956a-139f2b1e59aa] Running
	I1115 10:33:44.975204  329575 system_pods.go:89] "kube-controller-manager-flannel-931243" [2af99053-5d83-40bf-8a77-b51ecc9e635c] Running
	I1115 10:33:44.975209  329575 system_pods.go:89] "kube-proxy-4mgxz" [248f13bc-f380-4d20-8c0f-e29b7f2f5c55] Running
	I1115 10:33:44.975215  329575 system_pods.go:89] "kube-scheduler-flannel-931243" [60f24337-f796-45e4-a994-06acc2acf132] Running
	I1115 10:33:44.975220  329575 system_pods.go:89] "storage-provisioner" [0648193c-e75a-419f-a635-5b237fae43c0] Running
	I1115 10:33:44.975240  329575 retry.go:31] will retry after 589.273898ms: missing components: kube-dns
	I1115 10:33:45.571185  329575 system_pods.go:86] 7 kube-system pods found
	I1115 10:33:45.571234  329575 system_pods.go:89] "coredns-66bc5c9577-6pm45" [7eb8a360-8473-4fcb-89a2-67de63a35600] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:33:45.571248  329575 system_pods.go:89] "etcd-flannel-931243" [a9f9a132-fc83-4383-821b-96e5947b6895] Running
	I1115 10:33:45.571258  329575 system_pods.go:89] "kube-apiserver-flannel-931243" [45948078-6636-43a8-956a-139f2b1e59aa] Running
	I1115 10:33:45.571263  329575 system_pods.go:89] "kube-controller-manager-flannel-931243" [2af99053-5d83-40bf-8a77-b51ecc9e635c] Running
	I1115 10:33:45.571273  329575 system_pods.go:89] "kube-proxy-4mgxz" [248f13bc-f380-4d20-8c0f-e29b7f2f5c55] Running
	I1115 10:33:45.571278  329575 system_pods.go:89] "kube-scheduler-flannel-931243" [60f24337-f796-45e4-a994-06acc2acf132] Running
	I1115 10:33:45.571283  329575 system_pods.go:89] "storage-provisioner" [0648193c-e75a-419f-a635-5b237fae43c0] Running
	I1115 10:33:45.571319  329575 retry.go:31] will retry after 691.608258ms: missing components: kube-dns
	I1115 10:33:46.267819  329575 system_pods.go:86] 7 kube-system pods found
	I1115 10:33:46.267857  329575 system_pods.go:89] "coredns-66bc5c9577-6pm45" [7eb8a360-8473-4fcb-89a2-67de63a35600] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:33:46.267864  329575 system_pods.go:89] "etcd-flannel-931243" [a9f9a132-fc83-4383-821b-96e5947b6895] Running
	I1115 10:33:46.267876  329575 system_pods.go:89] "kube-apiserver-flannel-931243" [45948078-6636-43a8-956a-139f2b1e59aa] Running
	I1115 10:33:46.267882  329575 system_pods.go:89] "kube-controller-manager-flannel-931243" [2af99053-5d83-40bf-8a77-b51ecc9e635c] Running
	I1115 10:33:46.267887  329575 system_pods.go:89] "kube-proxy-4mgxz" [248f13bc-f380-4d20-8c0f-e29b7f2f5c55] Running
	I1115 10:33:46.267893  329575 system_pods.go:89] "kube-scheduler-flannel-931243" [60f24337-f796-45e4-a994-06acc2acf132] Running
	I1115 10:33:46.267898  329575 system_pods.go:89] "storage-provisioner" [0648193c-e75a-419f-a635-5b237fae43c0] Running
	I1115 10:33:46.267920  329575 retry.go:31] will retry after 628.203823ms: missing components: kube-dns
	I1115 10:33:46.901193  329575 system_pods.go:86] 7 kube-system pods found
	I1115 10:33:46.901245  329575 system_pods.go:89] "coredns-66bc5c9577-6pm45" [7eb8a360-8473-4fcb-89a2-67de63a35600] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:33:46.901254  329575 system_pods.go:89] "etcd-flannel-931243" [a9f9a132-fc83-4383-821b-96e5947b6895] Running
	I1115 10:33:46.901265  329575 system_pods.go:89] "kube-apiserver-flannel-931243" [45948078-6636-43a8-956a-139f2b1e59aa] Running
	I1115 10:33:46.901272  329575 system_pods.go:89] "kube-controller-manager-flannel-931243" [2af99053-5d83-40bf-8a77-b51ecc9e635c] Running
	I1115 10:33:46.901278  329575 system_pods.go:89] "kube-proxy-4mgxz" [248f13bc-f380-4d20-8c0f-e29b7f2f5c55] Running
	I1115 10:33:46.901289  329575 system_pods.go:89] "kube-scheduler-flannel-931243" [60f24337-f796-45e4-a994-06acc2acf132] Running
	I1115 10:33:46.901294  329575 system_pods.go:89] "storage-provisioner" [0648193c-e75a-419f-a635-5b237fae43c0] Running
	I1115 10:33:46.901313  329575 retry.go:31] will retry after 921.208557ms: missing components: kube-dns
	I1115 10:33:47.825942  329575 system_pods.go:86] 7 kube-system pods found
	I1115 10:33:47.826075  329575 system_pods.go:89] "coredns-66bc5c9577-6pm45" [7eb8a360-8473-4fcb-89a2-67de63a35600] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:33:47.826085  329575 system_pods.go:89] "etcd-flannel-931243" [a9f9a132-fc83-4383-821b-96e5947b6895] Running
	I1115 10:33:47.826094  329575 system_pods.go:89] "kube-apiserver-flannel-931243" [45948078-6636-43a8-956a-139f2b1e59aa] Running
	I1115 10:33:47.826100  329575 system_pods.go:89] "kube-controller-manager-flannel-931243" [2af99053-5d83-40bf-8a77-b51ecc9e635c] Running
	I1115 10:33:47.826110  329575 system_pods.go:89] "kube-proxy-4mgxz" [248f13bc-f380-4d20-8c0f-e29b7f2f5c55] Running
	I1115 10:33:47.826116  329575 system_pods.go:89] "kube-scheduler-flannel-931243" [60f24337-f796-45e4-a994-06acc2acf132] Running
	I1115 10:33:47.826121  329575 system_pods.go:89] "storage-provisioner" [0648193c-e75a-419f-a635-5b237fae43c0] Running
	I1115 10:33:47.826142  329575 retry.go:31] will retry after 1.046627852s: missing components: kube-dns
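	(The retry loop above is system_pods.go waiting for every required kube-system component to report Running; the only component still missing in each pass is kube-dns, i.e. CoreDNS. A quick manual check for the same condition, assuming the standard k8s-app=kube-dns label that CoreDNS pods carry:

	    kubectl -n kube-system get pods -l k8s-app=kube-dns
	)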
	I1115 10:33:45.059750  337023 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 10:33:45.067151  337023 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1115 10:33:45.067174  337023 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 10:33:45.098102  337023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 10:33:45.998231  337023 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:33:45.998386  337023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-087235 minikube.k8s.io/updated_at=2025_11_15T10_33_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510 minikube.k8s.io/name=old-k8s-version-087235 minikube.k8s.io/primary=true
	I1115 10:33:45.998557  337023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:46.037293  337023 ops.go:34] apiserver oom_adj: -16
	I1115 10:33:46.175469  337023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:46.676211  337023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:47.175777  337023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:47.675972  337023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:48.176175  337023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:43.718580  344850 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:33:44.433610  344850 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.536305651s)
	I1115 10:33:44.433645  344850 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1115 10:33:44.433674  344850 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1115 10:33:44.433712  344850 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1115 10:33:44.433732  344850 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1115 10:33:44.433752  344850 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:33:44.433787  344850 ssh_runner.go:195] Run: which crictl
	I1115 10:33:46.280132  344850 ssh_runner.go:235] Completed: which crictl: (1.846320961s)
	I1115 10:33:46.280169  344850 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.846415708s)
	I1115 10:33:46.280190  344850 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:33:46.280195  344850 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1115 10:33:46.280231  344850 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1115 10:33:46.280303  344850 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1115 10:33:46.309060  344850 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:33:47.461676  344850 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.181347357s)
	I1115 10:33:47.461708  344850 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1115 10:33:47.461727  344850 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1115 10:33:47.461728  344850 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.152632012s)
	I1115 10:33:47.461773  344850 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1115 10:33:47.461806  344850 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:33:45.455820  334883 addons.go:515] duration metric: took 1.22195414s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1115 10:33:45.459489  334883 system_pods.go:59] 8 kube-system pods found
	I1115 10:33:45.459542  334883 system_pods.go:61] "coredns-66bc5c9577-m9gt5" [49abaf2c-ce41-4da4-9bac-ac5bd4f74293] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:33:45.459560  334883 system_pods.go:61] "coredns-66bc5c9577-xzqds" [afce7cb3-6533-4f02-bb7a-89303dfed8d8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:33:45.459569  334883 system_pods.go:61] "etcd-bridge-931243" [247c2899-e191-43eb-961d-9fd87a7b61a3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:33:45.459582  334883 system_pods.go:61] "kube-apiserver-bridge-931243" [db50b2a9-2b55-4c17-bdfb-71c745c9c79d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:33:45.459588  334883 system_pods.go:61] "kube-controller-manager-bridge-931243" [4d06abd4-e52b-4684-8994-0806dec4e819] Running
	I1115 10:33:45.459600  334883 system_pods.go:61] "kube-proxy-66f22" [4b099a06-9569-4e25-93e7-d7f0c8753f04] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 10:33:45.459611  334883 system_pods.go:61] "kube-scheduler-bridge-931243" [aaf9d2e0-5d0f-4d8b-9a63-e3e727180464] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:33:45.459617  334883 system_pods.go:61] "storage-provisioner" [397b10aa-58fb-45e6-b89c-0cd339cd9725] Pending
	I1115 10:33:45.459634  334883 system_pods.go:74] duration metric: took 10.279813ms to wait for pod list to return data ...
	I1115 10:33:45.459649  334883 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:33:45.467493  334883 default_sa.go:45] found service account: "default"
	I1115 10:33:45.467532  334883 default_sa.go:55] duration metric: took 7.874775ms for default service account to be created ...
	I1115 10:33:45.467543  334883 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:33:45.478355  334883 system_pods.go:86] 8 kube-system pods found
	I1115 10:33:45.479984  334883 system_pods.go:89] "coredns-66bc5c9577-m9gt5" [49abaf2c-ce41-4da4-9bac-ac5bd4f74293] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:33:45.480018  334883 system_pods.go:89] "coredns-66bc5c9577-xzqds" [afce7cb3-6533-4f02-bb7a-89303dfed8d8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:33:45.480058  334883 system_pods.go:89] "etcd-bridge-931243" [247c2899-e191-43eb-961d-9fd87a7b61a3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:33:45.480069  334883 system_pods.go:89] "kube-apiserver-bridge-931243" [db50b2a9-2b55-4c17-bdfb-71c745c9c79d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:33:45.480080  334883 system_pods.go:89] "kube-controller-manager-bridge-931243" [4d06abd4-e52b-4684-8994-0806dec4e819] Running
	I1115 10:33:45.480091  334883 system_pods.go:89] "kube-proxy-66f22" [4b099a06-9569-4e25-93e7-d7f0c8753f04] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 10:33:45.480103  334883 system_pods.go:89] "kube-scheduler-bridge-931243" [aaf9d2e0-5d0f-4d8b-9a63-e3e727180464] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:33:45.480113  334883 system_pods.go:89] "storage-provisioner" [397b10aa-58fb-45e6-b89c-0cd339cd9725] Pending
	I1115 10:33:45.480165  334883 retry.go:31] will retry after 284.804903ms: missing components: kube-dns, kube-proxy
	I1115 10:33:45.772375  334883 system_pods.go:86] 8 kube-system pods found
	I1115 10:33:45.772421  334883 system_pods.go:89] "coredns-66bc5c9577-m9gt5" [49abaf2c-ce41-4da4-9bac-ac5bd4f74293] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:33:45.772441  334883 system_pods.go:89] "coredns-66bc5c9577-xzqds" [afce7cb3-6533-4f02-bb7a-89303dfed8d8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:33:45.772448  334883 system_pods.go:89] "etcd-bridge-931243" [247c2899-e191-43eb-961d-9fd87a7b61a3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:33:45.772457  334883 system_pods.go:89] "kube-apiserver-bridge-931243" [db50b2a9-2b55-4c17-bdfb-71c745c9c79d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:33:45.772464  334883 system_pods.go:89] "kube-controller-manager-bridge-931243" [4d06abd4-e52b-4684-8994-0806dec4e819] Running
	I1115 10:33:45.772473  334883 system_pods.go:89] "kube-proxy-66f22" [4b099a06-9569-4e25-93e7-d7f0c8753f04] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 10:33:45.772481  334883 system_pods.go:89] "kube-scheduler-bridge-931243" [aaf9d2e0-5d0f-4d8b-9a63-e3e727180464] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:33:45.772489  334883 system_pods.go:89] "storage-provisioner" [397b10aa-58fb-45e6-b89c-0cd339cd9725] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:33:45.772509  334883 retry.go:31] will retry after 383.002883ms: missing components: kube-dns, kube-proxy
	I1115 10:33:46.160612  334883 system_pods.go:86] 8 kube-system pods found
	I1115 10:33:46.160655  334883 system_pods.go:89] "coredns-66bc5c9577-m9gt5" [49abaf2c-ce41-4da4-9bac-ac5bd4f74293] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:33:46.160665  334883 system_pods.go:89] "coredns-66bc5c9577-xzqds" [afce7cb3-6533-4f02-bb7a-89303dfed8d8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:33:46.160675  334883 system_pods.go:89] "etcd-bridge-931243" [247c2899-e191-43eb-961d-9fd87a7b61a3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:33:46.160684  334883 system_pods.go:89] "kube-apiserver-bridge-931243" [db50b2a9-2b55-4c17-bdfb-71c745c9c79d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:33:46.160691  334883 system_pods.go:89] "kube-controller-manager-bridge-931243" [4d06abd4-e52b-4684-8994-0806dec4e819] Running
	I1115 10:33:46.160699  334883 system_pods.go:89] "kube-proxy-66f22" [4b099a06-9569-4e25-93e7-d7f0c8753f04] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 10:33:46.160706  334883 system_pods.go:89] "kube-scheduler-bridge-931243" [aaf9d2e0-5d0f-4d8b-9a63-e3e727180464] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:33:46.160720  334883 system_pods.go:89] "storage-provisioner" [397b10aa-58fb-45e6-b89c-0cd339cd9725] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:33:46.160743  334883 retry.go:31] will retry after 455.337129ms: missing components: kube-dns, kube-proxy
	I1115 10:33:46.619741  334883 system_pods.go:86] 7 kube-system pods found
	I1115 10:33:46.619775  334883 system_pods.go:89] "coredns-66bc5c9577-xzqds" [afce7cb3-6533-4f02-bb7a-89303dfed8d8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:33:46.619784  334883 system_pods.go:89] "etcd-bridge-931243" [247c2899-e191-43eb-961d-9fd87a7b61a3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:33:46.619792  334883 system_pods.go:89] "kube-apiserver-bridge-931243" [db50b2a9-2b55-4c17-bdfb-71c745c9c79d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:33:46.619797  334883 system_pods.go:89] "kube-controller-manager-bridge-931243" [4d06abd4-e52b-4684-8994-0806dec4e819] Running
	I1115 10:33:46.619801  334883 system_pods.go:89] "kube-proxy-66f22" [4b099a06-9569-4e25-93e7-d7f0c8753f04] Running
	I1115 10:33:46.619805  334883 system_pods.go:89] "kube-scheduler-bridge-931243" [aaf9d2e0-5d0f-4d8b-9a63-e3e727180464] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:33:46.619809  334883 system_pods.go:89] "storage-provisioner" [397b10aa-58fb-45e6-b89c-0cd339cd9725] Running
	I1115 10:33:46.619817  334883 system_pods.go:126] duration metric: took 1.152268198s to wait for k8s-apps to be running ...
	I1115 10:33:46.619823  334883 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:33:46.619872  334883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:33:46.633503  334883 system_svc.go:56] duration metric: took 13.665491ms WaitForService to wait for kubelet
	I1115 10:33:46.633548  334883 kubeadm.go:587] duration metric: took 2.399735894s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:33:46.633573  334883 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:33:46.636459  334883 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:33:46.636490  334883 node_conditions.go:123] node cpu capacity is 8
	I1115 10:33:46.636506  334883 node_conditions.go:105] duration metric: took 2.924983ms to run NodePressure ...
	I1115 10:33:46.636523  334883 start.go:242] waiting for startup goroutines ...
	I1115 10:33:46.636533  334883 start.go:247] waiting for cluster config update ...
	I1115 10:33:46.636550  334883 start.go:256] writing updated cluster config ...
	I1115 10:33:46.636887  334883 ssh_runner.go:195] Run: rm -f paused
	I1115 10:33:46.641219  334883 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:33:46.644413  334883 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xzqds" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:48.877748  329575 system_pods.go:86] 7 kube-system pods found
	I1115 10:33:48.877794  329575 system_pods.go:89] "coredns-66bc5c9577-6pm45" [7eb8a360-8473-4fcb-89a2-67de63a35600] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:33:48.877802  329575 system_pods.go:89] "etcd-flannel-931243" [a9f9a132-fc83-4383-821b-96e5947b6895] Running
	I1115 10:33:48.877809  329575 system_pods.go:89] "kube-apiserver-flannel-931243" [45948078-6636-43a8-956a-139f2b1e59aa] Running
	I1115 10:33:48.877815  329575 system_pods.go:89] "kube-controller-manager-flannel-931243" [2af99053-5d83-40bf-8a77-b51ecc9e635c] Running
	I1115 10:33:48.877821  329575 system_pods.go:89] "kube-proxy-4mgxz" [248f13bc-f380-4d20-8c0f-e29b7f2f5c55] Running
	I1115 10:33:48.877824  329575 system_pods.go:89] "kube-scheduler-flannel-931243" [60f24337-f796-45e4-a994-06acc2acf132] Running
	I1115 10:33:48.877827  329575 system_pods.go:89] "storage-provisioner" [0648193c-e75a-419f-a635-5b237fae43c0] Running
	I1115 10:33:48.877848  329575 retry.go:31] will retry after 1.351064049s: missing components: kube-dns
	I1115 10:33:50.236316  329575 system_pods.go:86] 7 kube-system pods found
	I1115 10:33:50.236361  329575 system_pods.go:89] "coredns-66bc5c9577-6pm45" [7eb8a360-8473-4fcb-89a2-67de63a35600] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:33:50.236374  329575 system_pods.go:89] "etcd-flannel-931243" [a9f9a132-fc83-4383-821b-96e5947b6895] Running
	I1115 10:33:50.236383  329575 system_pods.go:89] "kube-apiserver-flannel-931243" [45948078-6636-43a8-956a-139f2b1e59aa] Running
	I1115 10:33:50.236389  329575 system_pods.go:89] "kube-controller-manager-flannel-931243" [2af99053-5d83-40bf-8a77-b51ecc9e635c] Running
	I1115 10:33:50.236395  329575 system_pods.go:89] "kube-proxy-4mgxz" [248f13bc-f380-4d20-8c0f-e29b7f2f5c55] Running
	I1115 10:33:50.236411  329575 system_pods.go:89] "kube-scheduler-flannel-931243" [60f24337-f796-45e4-a994-06acc2acf132] Running
	I1115 10:33:50.236417  329575 system_pods.go:89] "storage-provisioner" [0648193c-e75a-419f-a635-5b237fae43c0] Running
	I1115 10:33:50.236445  329575 retry.go:31] will retry after 2.20285986s: missing components: kube-dns
	I1115 10:33:52.445802  329575 system_pods.go:86] 7 kube-system pods found
	I1115 10:33:52.445847  329575 system_pods.go:89] "coredns-66bc5c9577-6pm45" [7eb8a360-8473-4fcb-89a2-67de63a35600] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:33:52.445856  329575 system_pods.go:89] "etcd-flannel-931243" [a9f9a132-fc83-4383-821b-96e5947b6895] Running
	I1115 10:33:52.445864  329575 system_pods.go:89] "kube-apiserver-flannel-931243" [45948078-6636-43a8-956a-139f2b1e59aa] Running
	I1115 10:33:52.445870  329575 system_pods.go:89] "kube-controller-manager-flannel-931243" [2af99053-5d83-40bf-8a77-b51ecc9e635c] Running
	I1115 10:33:52.445877  329575 system_pods.go:89] "kube-proxy-4mgxz" [248f13bc-f380-4d20-8c0f-e29b7f2f5c55] Running
	I1115 10:33:52.445882  329575 system_pods.go:89] "kube-scheduler-flannel-931243" [60f24337-f796-45e4-a994-06acc2acf132] Running
	I1115 10:33:52.445887  329575 system_pods.go:89] "storage-provisioner" [0648193c-e75a-419f-a635-5b237fae43c0] Running
	I1115 10:33:52.445921  329575 retry.go:31] will retry after 2.74290338s: missing components: kube-dns
	I1115 10:33:48.676176  337023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:49.175647  337023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:49.676214  337023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:50.176544  337023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:50.676327  337023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:51.176194  337023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:51.676508  337023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:52.176408  337023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:52.676463  337023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:53.175575  337023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:48.823838  344850 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.362038051s)
	I1115 10:33:48.823869  344850 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1115 10:33:48.823892  344850 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1115 10:33:48.823895  344850 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.362064778s)
	I1115 10:33:48.823935  344850 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1115 10:33:48.823942  344850 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1115 10:33:48.824022  344850 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1115 10:33:50.230217  344850 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.40624342s)
	I1115 10:33:50.230253  344850 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1115 10:33:50.230279  344850 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1115 10:33:50.230283  344850 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.406227844s)
	I1115 10:33:50.230320  344850 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1115 10:33:50.230341  344850 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1115 10:33:50.230348  344850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	W1115 10:33:48.650325  334883 pod_ready.go:104] pod "coredns-66bc5c9577-xzqds" is not "Ready", error: <nil>
	W1115 10:33:50.650472  334883 pod_ready.go:104] pod "coredns-66bc5c9577-xzqds" is not "Ready", error: <nil>
	W1115 10:33:53.150645  334883 pod_ready.go:104] pod "coredns-66bc5c9577-xzqds" is not "Ready", error: <nil>
	I1115 10:33:55.194985  329575 system_pods.go:86] 7 kube-system pods found
	I1115 10:33:55.195019  329575 system_pods.go:89] "coredns-66bc5c9577-6pm45" [7eb8a360-8473-4fcb-89a2-67de63a35600] Running
	I1115 10:33:55.195028  329575 system_pods.go:89] "etcd-flannel-931243" [a9f9a132-fc83-4383-821b-96e5947b6895] Running
	I1115 10:33:55.195034  329575 system_pods.go:89] "kube-apiserver-flannel-931243" [45948078-6636-43a8-956a-139f2b1e59aa] Running
	I1115 10:33:55.195039  329575 system_pods.go:89] "kube-controller-manager-flannel-931243" [2af99053-5d83-40bf-8a77-b51ecc9e635c] Running
	I1115 10:33:55.195044  329575 system_pods.go:89] "kube-proxy-4mgxz" [248f13bc-f380-4d20-8c0f-e29b7f2f5c55] Running
	I1115 10:33:55.195050  329575 system_pods.go:89] "kube-scheduler-flannel-931243" [60f24337-f796-45e4-a994-06acc2acf132] Running
	I1115 10:33:55.195056  329575 system_pods.go:89] "storage-provisioner" [0648193c-e75a-419f-a635-5b237fae43c0] Running
	I1115 10:33:55.195076  329575 system_pods.go:126] duration metric: took 11.072737245s to wait for k8s-apps to be running ...
	I1115 10:33:55.195089  329575 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:33:55.195149  329575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:33:55.212437  329575 system_svc.go:56] duration metric: took 17.337576ms WaitForService to wait for kubelet
	I1115 10:33:55.212473  329575 kubeadm.go:587] duration metric: took 19.13072151s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:33:55.212498  329575 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:33:55.216101  329575 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:33:55.216139  329575 node_conditions.go:123] node cpu capacity is 8
	I1115 10:33:55.216159  329575 node_conditions.go:105] duration metric: took 3.653911ms to run NodePressure ...
	I1115 10:33:55.216175  329575 start.go:242] waiting for startup goroutines ...
	I1115 10:33:55.216186  329575 start.go:247] waiting for cluster config update ...
	I1115 10:33:55.216204  329575 start.go:256] writing updated cluster config ...
	I1115 10:33:55.216563  329575 ssh_runner.go:195] Run: rm -f paused
	I1115 10:33:55.221283  329575 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:33:55.225111  329575 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6pm45" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:55.230884  329575 pod_ready.go:94] pod "coredns-66bc5c9577-6pm45" is "Ready"
	I1115 10:33:55.230912  329575 pod_ready.go:86] duration metric: took 5.775042ms for pod "coredns-66bc5c9577-6pm45" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:55.233134  329575 pod_ready.go:83] waiting for pod "etcd-flannel-931243" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:55.239142  329575 pod_ready.go:94] pod "etcd-flannel-931243" is "Ready"
	I1115 10:33:55.239172  329575 pod_ready.go:86] duration metric: took 6.012475ms for pod "etcd-flannel-931243" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:55.241720  329575 pod_ready.go:83] waiting for pod "kube-apiserver-flannel-931243" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:55.247206  329575 pod_ready.go:94] pod "kube-apiserver-flannel-931243" is "Ready"
	I1115 10:33:55.247235  329575 pod_ready.go:86] duration metric: took 5.49296ms for pod "kube-apiserver-flannel-931243" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:55.249383  329575 pod_ready.go:83] waiting for pod "kube-controller-manager-flannel-931243" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:55.626657  329575 pod_ready.go:94] pod "kube-controller-manager-flannel-931243" is "Ready"
	I1115 10:33:55.626690  329575 pod_ready.go:86] duration metric: took 377.28294ms for pod "kube-controller-manager-flannel-931243" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:55.825651  329575 pod_ready.go:83] waiting for pod "kube-proxy-4mgxz" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:56.226224  329575 pod_ready.go:94] pod "kube-proxy-4mgxz" is "Ready"
	I1115 10:33:56.226259  329575 pod_ready.go:86] duration metric: took 400.579139ms for pod "kube-proxy-4mgxz" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:56.426004  329575 pod_ready.go:83] waiting for pod "kube-scheduler-flannel-931243" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:56.825432  329575 pod_ready.go:94] pod "kube-scheduler-flannel-931243" is "Ready"
	I1115 10:33:56.825468  329575 pod_ready.go:86] duration metric: took 399.435435ms for pod "kube-scheduler-flannel-931243" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:56.825487  329575 pod_ready.go:40] duration metric: took 1.604161602s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:33:56.875396  329575 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:33:56.877148  329575 out.go:179] * Done! kubectl is now configured to use "flannel-931243" cluster and "default" namespace by default
	I1115 10:33:53.675891  337023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:54.175643  337023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:54.676489  337023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:55.176185  337023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:55.676180  337023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:56.175863  337023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:56.675746  337023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:57.176528  337023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:57.676327  337023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:33:57.753047  337023 kubeadm.go:1114] duration metric: took 11.754525661s to wait for elevateKubeSystemPrivileges
	I1115 10:33:57.753088  337023 kubeadm.go:403] duration metric: took 23.992134934s to StartCluster
	I1115 10:33:57.753112  337023 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:33:57.753196  337023 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:33:57.755057  337023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:33:57.755352  337023 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 10:33:57.755369  337023 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:33:57.755483  337023 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:33:57.755574  337023 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-087235"
	I1115 10:33:57.755593  337023 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-087235"
	I1115 10:33:57.755617  337023 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-087235"
	I1115 10:33:57.755630  337023 host.go:66] Checking if "old-k8s-version-087235" exists ...
	I1115 10:33:57.755633  337023 config.go:182] Loaded profile config "old-k8s-version-087235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 10:33:57.755644  337023 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-087235"
	I1115 10:33:57.756070  337023 cli_runner.go:164] Run: docker container inspect old-k8s-version-087235 --format={{.State.Status}}
	I1115 10:33:57.756267  337023 cli_runner.go:164] Run: docker container inspect old-k8s-version-087235 --format={{.State.Status}}
	I1115 10:33:57.756834  337023 out.go:179] * Verifying Kubernetes components...
	I1115 10:33:57.758034  337023 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:33:57.780088  337023 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:33:57.781107  337023 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:33:57.781126  337023 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:33:57.781192  337023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-087235
	I1115 10:33:57.783492  337023 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-087235"
	I1115 10:33:57.783548  337023 host.go:66] Checking if "old-k8s-version-087235" exists ...
	I1115 10:33:57.784078  337023 cli_runner.go:164] Run: docker container inspect old-k8s-version-087235 --format={{.State.Status}}
	I1115 10:33:57.805994  337023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/old-k8s-version-087235/id_rsa Username:docker}
	I1115 10:33:57.807236  337023 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:33:57.807255  337023 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:33:57.807299  337023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-087235
	I1115 10:33:57.826483  337023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/old-k8s-version-087235/id_rsa Username:docker}
	I1115 10:33:58.042463  337023 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 10:33:58.055792  337023 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:33:58.156362  337023 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:33:53.724399  344850 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.494023319s)
	I1115 10:33:53.724432  344850 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1115 10:33:53.724479  344850 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1115 10:33:53.724540  344850 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1115 10:33:54.286607  344850 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1115 10:33:54.286654  344850 cache_images.go:125] Successfully loaded all cached images
	I1115 10:33:54.286661  344850 cache_images.go:94] duration metric: took 13.203920811s to LoadCachedImages
	I1115 10:33:54.286676  344850 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1115 10:33:54.286892  344850 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-283677 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-283677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:33:54.287043  344850 ssh_runner.go:195] Run: crio config
	I1115 10:33:54.337464  344850 cni.go:84] Creating CNI manager for ""
	I1115 10:33:54.337487  344850 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:33:54.337503  344850 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:33:54.337523  344850 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-283677 NodeName:no-preload-283677 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:33:54.337645  344850 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-283677"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:33:54.337723  344850 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:33:54.346467  344850 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1115 10:33:54.346523  344850 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1115 10:33:54.354449  344850 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1115 10:33:54.354520  344850 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1115 10:33:54.354551  344850 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1115 10:33:54.354567  344850 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1115 10:33:54.358434  344850 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1115 10:33:54.358464  344850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1115 10:33:55.429598  344850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:33:55.445757  344850 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1115 10:33:55.449896  344850 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1115 10:33:55.449934  344850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1115 10:33:55.518918  344850 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1115 10:33:55.525258  344850 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1115 10:33:55.525292  344850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
	I1115 10:33:55.764465  344850 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:33:55.772584  344850 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1115 10:33:55.785227  344850 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:33:55.800652  344850 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1115 10:33:55.813266  344850 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:33:55.816978  344850 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:33:55.827754  344850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:33:55.913504  344850 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:33:55.938819  344850 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677 for IP: 192.168.76.2
	I1115 10:33:55.938842  344850 certs.go:195] generating shared ca certs ...
	I1115 10:33:55.938863  344850 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:33:55.939061  344850 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 10:33:55.939121  344850 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 10:33:55.939134  344850 certs.go:257] generating profile certs ...
	I1115 10:33:55.939220  344850 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/client.key
	I1115 10:33:55.939243  344850 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/client.crt with IP's: []
	I1115 10:33:56.170797  344850 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/client.crt ...
	I1115 10:33:56.170829  344850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/client.crt: {Name:mk5c1a7fd848fe80802a73309bc703b4054b3b85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:33:56.171017  344850 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/client.key ...
	I1115 10:33:56.171030  344850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/client.key: {Name:mk22d0e68c9875c9b8742603703a974da422f7ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:33:56.171131  344850 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/apiserver.key.d18d8ebf
	I1115 10:33:56.171147  344850 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/apiserver.crt.d18d8ebf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1115 10:33:56.364846  344850 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/apiserver.crt.d18d8ebf ...
	I1115 10:33:56.364883  344850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/apiserver.crt.d18d8ebf: {Name:mk159aacb055318dfa6a7d9f112015cd11c45703 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:33:56.365070  344850 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/apiserver.key.d18d8ebf ...
	I1115 10:33:56.365097  344850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/apiserver.key.d18d8ebf: {Name:mk8ee4ec09c040bdedc32126401c072ee6294f29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:33:56.365193  344850 certs.go:382] copying /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/apiserver.crt.d18d8ebf -> /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/apiserver.crt
	I1115 10:33:56.365283  344850 certs.go:386] copying /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/apiserver.key.d18d8ebf -> /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/apiserver.key
	I1115 10:33:56.365346  344850 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/proxy-client.key
	I1115 10:33:56.365361  344850 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/proxy-client.crt with IP's: []
	I1115 10:33:56.763059  344850 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/proxy-client.crt ...
	I1115 10:33:56.763095  344850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/proxy-client.crt: {Name:mk27deaffc00f83e2f989c073d250bb1bf8f9001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:33:56.763288  344850 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/proxy-client.key ...
	I1115 10:33:56.763306  344850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/proxy-client.key: {Name:mkedc095173814489f04d9ca62a50dd277a0c6ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:33:56.763492  344850 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:33:56.763530  344850 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:33:56.763538  344850 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:33:56.763559  344850 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:33:56.763584  344850 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:33:56.763605  344850 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:33:56.763655  344850 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:33:56.764237  344850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:33:56.784391  344850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:33:56.803705  344850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:33:56.821690  344850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:33:56.843520  344850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 10:33:56.862484  344850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:33:56.883054  344850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:33:56.906071  344850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 10:33:56.924472  344850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:33:56.943690  344850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:33:56.961642  344850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:33:56.979927  344850 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:33:56.992545  344850 ssh_runner.go:195] Run: openssl version
	I1115 10:33:56.999035  344850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:33:57.008018  344850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:33:57.011965  344850 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:33:57.012028  344850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:33:57.051697  344850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:33:57.064008  344850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:33:57.073061  344850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:33:57.077108  344850 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:33:57.077159  344850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:33:57.115574  344850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:33:57.125810  344850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:33:57.134590  344850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:33:57.139071  344850 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:33:57.139133  344850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:33:57.177747  344850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:33:57.187740  344850 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:33:57.191934  344850 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 10:33:57.192022  344850 kubeadm.go:401] StartCluster: {Name:no-preload-283677 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-283677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:33:57.192108  344850 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:33:57.192162  344850 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:33:57.223028  344850 cri.go:89] found id: ""
	I1115 10:33:57.223103  344850 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:33:57.231633  344850 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 10:33:57.240925  344850 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 10:33:57.241013  344850 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 10:33:57.250593  344850 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 10:33:57.250615  344850 kubeadm.go:158] found existing configuration files:
	
	I1115 10:33:57.250667  344850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 10:33:57.259189  344850 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 10:33:57.259256  344850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 10:33:57.267394  344850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 10:33:57.275233  344850 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 10:33:57.275290  344850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 10:33:57.283099  344850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 10:33:57.291725  344850 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 10:33:57.291795  344850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 10:33:57.300320  344850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 10:33:57.308022  344850 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 10:33:57.308072  344850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 10:33:57.315785  344850 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 10:33:57.384432  344850 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1115 10:33:57.384811  344850 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1115 10:33:57.451621  344850 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1115 10:33:55.151265  334883 pod_ready.go:104] pod "coredns-66bc5c9577-xzqds" is not "Ready", error: <nil>
	W1115 10:33:57.152042  334883 pod_ready.go:104] pod "coredns-66bc5c9577-xzqds" is not "Ready", error: <nil>
	I1115 10:33:58.235880  337023 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:33:58.954558  337023 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1115 10:33:59.236309  337023 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.180456788s)
	I1115 10:33:59.236345  337023 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.079938968s)
	I1115 10:33:59.236428  337023 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.00051629s)
	I1115 10:33:59.237815  337023 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-087235" to be "Ready" ...
	I1115 10:33:59.243589  337023 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1115 10:33:59.244579  337023 addons.go:515] duration metric: took 1.489091932s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1115 10:33:59.458929  337023 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-087235" context rescaled to 1 replicas
	W1115 10:34:01.242784  337023 node_ready.go:57] node "old-k8s-version-087235" has "Ready":"False" status (will retry)
	W1115 10:33:59.649565  334883 pod_ready.go:104] pod "coredns-66bc5c9577-xzqds" is not "Ready", error: <nil>
	W1115 10:34:01.650989  334883 pod_ready.go:104] pod "coredns-66bc5c9577-xzqds" is not "Ready", error: <nil>
	W1115 10:34:03.741288  337023 node_ready.go:57] node "old-k8s-version-087235" has "Ready":"False" status (will retry)
	W1115 10:34:05.742301  337023 node_ready.go:57] node "old-k8s-version-087235" has "Ready":"False" status (will retry)
	W1115 10:34:04.150077  334883 pod_ready.go:104] pod "coredns-66bc5c9577-xzqds" is not "Ready", error: <nil>
	W1115 10:34:06.650052  334883 pod_ready.go:104] pod "coredns-66bc5c9577-xzqds" is not "Ready", error: <nil>
	I1115 10:34:08.776502  344850 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 10:34:08.776564  344850 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 10:34:08.776685  344850 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 10:34:08.776783  344850 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1115 10:34:08.776831  344850 kubeadm.go:319] OS: Linux
	I1115 10:34:08.776921  344850 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 10:34:08.777003  344850 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 10:34:08.777062  344850 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 10:34:08.777119  344850 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 10:34:08.777176  344850 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 10:34:08.777236  344850 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 10:34:08.777296  344850 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 10:34:08.777362  344850 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 10:34:08.777415  344850 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 10:34:08.777504  344850 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 10:34:08.777613  344850 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 10:34:08.777712  344850 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 10:34:08.777784  344850 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 10:34:08.779278  344850 out.go:252]   - Generating certificates and keys ...
	I1115 10:34:08.779393  344850 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 10:34:08.779505  344850 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 10:34:08.779605  344850 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 10:34:08.779689  344850 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 10:34:08.779774  344850 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 10:34:08.779874  344850 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 10:34:08.779970  344850 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 10:34:08.780122  344850 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-283677] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1115 10:34:08.780196  344850 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 10:34:08.780361  344850 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-283677] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1115 10:34:08.780439  344850 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 10:34:08.780533  344850 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 10:34:08.780598  344850 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 10:34:08.780669  344850 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 10:34:08.780723  344850 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 10:34:08.780797  344850 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 10:34:08.780868  344850 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 10:34:08.780943  344850 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 10:34:08.781032  344850 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 10:34:08.781154  344850 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 10:34:08.781258  344850 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 10:34:08.782551  344850 out.go:252]   - Booting up control plane ...
	I1115 10:34:08.782622  344850 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 10:34:08.782689  344850 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 10:34:08.782746  344850 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 10:34:08.782833  344850 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 10:34:08.782908  344850 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 10:34:08.783030  344850 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 10:34:08.783119  344850 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 10:34:08.783168  344850 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 10:34:08.783291  344850 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 10:34:08.783382  344850 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 10:34:08.783459  344850 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.857831ms
	I1115 10:34:08.783566  344850 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 10:34:08.783641  344850 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1115 10:34:08.783715  344850 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 10:34:08.783792  344850 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 10:34:08.783862  344850 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.664658373s
	I1115 10:34:08.783919  344850 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.358531612s
	I1115 10:34:08.784040  344850 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001317303s
	I1115 10:34:08.784132  344850 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 10:34:08.784231  344850 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 10:34:08.784296  344850 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 10:34:08.784473  344850 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-283677 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 10:34:08.784567  344850 kubeadm.go:319] [bootstrap-token] Using token: 8rxfcb.hxf26ytx6rdbmrd4
	I1115 10:34:08.785824  344850 out.go:252]   - Configuring RBAC rules ...
	I1115 10:34:08.785915  344850 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 10:34:08.786004  344850 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 10:34:08.786128  344850 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 10:34:08.786266  344850 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 10:34:08.786370  344850 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 10:34:08.786439  344850 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 10:34:08.786559  344850 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 10:34:08.786600  344850 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 10:34:08.786645  344850 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 10:34:08.786651  344850 kubeadm.go:319] 
	I1115 10:34:08.786698  344850 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 10:34:08.786703  344850 kubeadm.go:319] 
	I1115 10:34:08.786807  344850 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 10:34:08.786829  344850 kubeadm.go:319] 
	I1115 10:34:08.786883  344850 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 10:34:08.787015  344850 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 10:34:08.787084  344850 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 10:34:08.787093  344850 kubeadm.go:319] 
	I1115 10:34:08.787164  344850 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 10:34:08.787171  344850 kubeadm.go:319] 
	I1115 10:34:08.787216  344850 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 10:34:08.787222  344850 kubeadm.go:319] 
	I1115 10:34:08.787262  344850 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 10:34:08.787341  344850 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 10:34:08.787436  344850 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 10:34:08.787444  344850 kubeadm.go:319] 
	I1115 10:34:08.787543  344850 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 10:34:08.787644  344850 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 10:34:08.787653  344850 kubeadm.go:319] 
	I1115 10:34:08.787734  344850 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 8rxfcb.hxf26ytx6rdbmrd4 \
	I1115 10:34:08.787821  344850 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c958101d3a67868123c5e439a6c1ea6e30a99a6538b76cbe461e2c919cf1aec \
	I1115 10:34:08.787845  344850 kubeadm.go:319] 	--control-plane 
	I1115 10:34:08.787852  344850 kubeadm.go:319] 
	I1115 10:34:08.787917  344850 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 10:34:08.787924  344850 kubeadm.go:319] 
	I1115 10:34:08.788032  344850 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 8rxfcb.hxf26ytx6rdbmrd4 \
	I1115 10:34:08.788142  344850 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c958101d3a67868123c5e439a6c1ea6e30a99a6538b76cbe461e2c919cf1aec 
	I1115 10:34:08.788162  344850 cni.go:84] Creating CNI manager for ""
	I1115 10:34:08.788175  344850 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:34:08.789366  344850 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1115 10:34:08.241674  337023 node_ready.go:57] node "old-k8s-version-087235" has "Ready":"False" status (will retry)
	W1115 10:34:10.740671  337023 node_ready.go:57] node "old-k8s-version-087235" has "Ready":"False" status (will retry)
	W1115 10:34:12.742211  337023 node_ready.go:57] node "old-k8s-version-087235" has "Ready":"False" status (will retry)
	I1115 10:34:08.791345  344850 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 10:34:08.795866  344850 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 10:34:08.795882  344850 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 10:34:08.809497  344850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 10:34:09.014419  344850 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:34:09.014490  344850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:34:09.014581  344850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-283677 minikube.k8s.io/updated_at=2025_11_15T10_34_09_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510 minikube.k8s.io/name=no-preload-283677 minikube.k8s.io/primary=true
	I1115 10:34:09.156187  344850 ops.go:34] apiserver oom_adj: -16
	I1115 10:34:09.156338  344850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:34:09.657131  344850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:34:10.156777  344850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:34:10.656880  344850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:34:11.156395  344850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:34:11.657247  344850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:34:12.157080  344850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:34:12.657394  344850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:34:13.156609  344850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1115 10:34:08.650122  334883 pod_ready.go:104] pod "coredns-66bc5c9577-xzqds" is not "Ready", error: <nil>
	W1115 10:34:11.149912  334883 pod_ready.go:104] pod "coredns-66bc5c9577-xzqds" is not "Ready", error: <nil>
	W1115 10:34:13.150012  334883 pod_ready.go:104] pod "coredns-66bc5c9577-xzqds" is not "Ready", error: <nil>
	I1115 10:34:13.656455  344850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:34:13.741598  344850 kubeadm.go:1114] duration metric: took 4.727175968s to wait for elevateKubeSystemPrivileges
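	(Editor's note: the burst of identical "kubectl get sa default" invocations above is a readiness poll. Judging from the timestamps, minikube retries roughly every half second until the default service account exists, and the duration metric on the preceding line attributes that wait to elevateKubeSystemPrivileges. A minimal equivalent loop, assuming the same binary and kubeconfig paths shown in the log:
	    # poll until the "default" service account has been created by the controller-manager
	    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done
	This is a sketch of the observed behavior, not minikube's actual implementation.)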
	I1115 10:34:13.741642  344850 kubeadm.go:403] duration metric: took 16.549624274s to StartCluster
	I1115 10:34:13.741666  344850 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:34:13.741744  344850 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:34:13.743870  344850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:34:13.744138  344850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 10:34:13.744144  344850 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:34:13.744232  344850 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:34:13.744348  344850 addons.go:70] Setting storage-provisioner=true in profile "no-preload-283677"
	I1115 10:34:13.744367  344850 addons.go:239] Setting addon storage-provisioner=true in "no-preload-283677"
	I1115 10:34:13.744392  344850 config.go:182] Loaded profile config "no-preload-283677": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:34:13.744401  344850 host.go:66] Checking if "no-preload-283677" exists ...
	I1115 10:34:13.744379  344850 addons.go:70] Setting default-storageclass=true in profile "no-preload-283677"
	I1115 10:34:13.744426  344850 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-283677"
	I1115 10:34:13.744832  344850 cli_runner.go:164] Run: docker container inspect no-preload-283677 --format={{.State.Status}}
	I1115 10:34:13.745061  344850 cli_runner.go:164] Run: docker container inspect no-preload-283677 --format={{.State.Status}}
	I1115 10:34:13.745855  344850 out.go:179] * Verifying Kubernetes components...
	I1115 10:34:13.747381  344850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:34:13.773125  344850 addons.go:239] Setting addon default-storageclass=true in "no-preload-283677"
	I1115 10:34:13.773178  344850 host.go:66] Checking if "no-preload-283677" exists ...
	I1115 10:34:13.773655  344850 cli_runner.go:164] Run: docker container inspect no-preload-283677 --format={{.State.Status}}
	I1115 10:34:13.784146  344850 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:34:13.787455  344850 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:34:13.787481  344850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:34:13.787534  344850 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:34:13.793348  344850 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:34:13.793366  344850 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:34:13.793419  344850 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:34:13.806005  344850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
	I1115 10:34:13.812279  344850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
	I1115 10:34:14.041082  344850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
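	(Editor's note: the long replace pipeline above patches the CoreDNS ConfigMap in place. Judging only from the -e expressions shown, the rewritten Corefile gains a "log" directive ahead of "errors" and a hosts block ahead of the "forward . /etc/resolv.conf" line, roughly:
	    hosts {
	       192.168.76.1 host.minikube.internal
	       fallthrough
	    }
	The surrounding Corefile directives are assumed rather than shown in this log; the "host record injected into CoreDNS's ConfigMap" message below confirms the edit took effect.)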
	I1115 10:34:14.144335  344850 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:34:14.144420  344850 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:34:14.221223  344850 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:34:14.621522  344850 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1115 10:34:14.871815  344850 node_ready.go:35] waiting up to 6m0s for node "no-preload-283677" to be "Ready" ...
	I1115 10:34:14.872170  344850 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1115 10:34:13.240777  337023 node_ready.go:49] node "old-k8s-version-087235" is "Ready"
	I1115 10:34:13.240813  337023 node_ready.go:38] duration metric: took 14.002966758s for node "old-k8s-version-087235" to be "Ready" ...
	I1115 10:34:13.240831  337023 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:34:13.240891  337023 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:34:13.254988  337023 api_server.go:72] duration metric: took 15.499548481s to wait for apiserver process to appear ...
	I1115 10:34:13.255021  337023 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:34:13.255046  337023 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1115 10:34:13.260141  337023 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1115 10:34:13.261239  337023 api_server.go:141] control plane version: v1.28.0
	I1115 10:34:13.261265  337023 api_server.go:131] duration metric: took 6.237631ms to wait for apiserver health ...
	I1115 10:34:13.261273  337023 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:34:13.264720  337023 system_pods.go:59] 8 kube-system pods found
	I1115 10:34:13.264749  337023 system_pods.go:61] "coredns-5dd5756b68-bdpfv" [f9b5c9c2-d642-4a22-890d-89a8f91f771b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:34:13.264754  337023 system_pods.go:61] "etcd-old-k8s-version-087235" [ad6ea576-0a1a-4a8a-96c2-3076229520e9] Running
	I1115 10:34:13.264759  337023 system_pods.go:61] "kindnet-7btvm" [40ac7700-b07d-4504-8532-414d2fab7395] Running
	I1115 10:34:13.264762  337023 system_pods.go:61] "kube-apiserver-old-k8s-version-087235" [d035a8eb-e4b4-4eae-8726-c29fa6059168] Running
	I1115 10:34:13.264766  337023 system_pods.go:61] "kube-controller-manager-old-k8s-version-087235" [9cc04644-cddd-4b9c-964c-826ee56dbc47] Running
	I1115 10:34:13.264769  337023 system_pods.go:61] "kube-proxy-gl22j" [a854c189-3bd6-4c7d-8160-ae11b35db003] Running
	I1115 10:34:13.264775  337023 system_pods.go:61] "kube-scheduler-old-k8s-version-087235" [1da33e9b-b3a7-4e2b-b951-81bfc76bb515] Running
	I1115 10:34:13.264782  337023 system_pods.go:61] "storage-provisioner" [f2e47bd9-5a00-47cd-9b2e-5b80244c04a1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:34:13.264792  337023 system_pods.go:74] duration metric: took 3.51393ms to wait for pod list to return data ...
	I1115 10:34:13.264799  337023 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:34:13.266915  337023 default_sa.go:45] found service account: "default"
	I1115 10:34:13.266934  337023 default_sa.go:55] duration metric: took 2.127786ms for default service account to be created ...
	I1115 10:34:13.266942  337023 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:34:13.270886  337023 system_pods.go:86] 8 kube-system pods found
	I1115 10:34:13.271196  337023 system_pods.go:89] "coredns-5dd5756b68-bdpfv" [f9b5c9c2-d642-4a22-890d-89a8f91f771b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:34:13.271209  337023 system_pods.go:89] "etcd-old-k8s-version-087235" [ad6ea576-0a1a-4a8a-96c2-3076229520e9] Running
	I1115 10:34:13.271215  337023 system_pods.go:89] "kindnet-7btvm" [40ac7700-b07d-4504-8532-414d2fab7395] Running
	I1115 10:34:13.271220  337023 system_pods.go:89] "kube-apiserver-old-k8s-version-087235" [d035a8eb-e4b4-4eae-8726-c29fa6059168] Running
	I1115 10:34:13.271223  337023 system_pods.go:89] "kube-controller-manager-old-k8s-version-087235" [9cc04644-cddd-4b9c-964c-826ee56dbc47] Running
	I1115 10:34:13.271227  337023 system_pods.go:89] "kube-proxy-gl22j" [a854c189-3bd6-4c7d-8160-ae11b35db003] Running
	I1115 10:34:13.271230  337023 system_pods.go:89] "kube-scheduler-old-k8s-version-087235" [1da33e9b-b3a7-4e2b-b951-81bfc76bb515] Running
	I1115 10:34:13.271236  337023 system_pods.go:89] "storage-provisioner" [f2e47bd9-5a00-47cd-9b2e-5b80244c04a1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:34:13.271273  337023 retry.go:31] will retry after 305.757256ms: missing components: kube-dns
	I1115 10:34:13.582037  337023 system_pods.go:86] 8 kube-system pods found
	I1115 10:34:13.582066  337023 system_pods.go:89] "coredns-5dd5756b68-bdpfv" [f9b5c9c2-d642-4a22-890d-89a8f91f771b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:34:13.582073  337023 system_pods.go:89] "etcd-old-k8s-version-087235" [ad6ea576-0a1a-4a8a-96c2-3076229520e9] Running
	I1115 10:34:13.582080  337023 system_pods.go:89] "kindnet-7btvm" [40ac7700-b07d-4504-8532-414d2fab7395] Running
	I1115 10:34:13.582084  337023 system_pods.go:89] "kube-apiserver-old-k8s-version-087235" [d035a8eb-e4b4-4eae-8726-c29fa6059168] Running
	I1115 10:34:13.582089  337023 system_pods.go:89] "kube-controller-manager-old-k8s-version-087235" [9cc04644-cddd-4b9c-964c-826ee56dbc47] Running
	I1115 10:34:13.582092  337023 system_pods.go:89] "kube-proxy-gl22j" [a854c189-3bd6-4c7d-8160-ae11b35db003] Running
	I1115 10:34:13.582095  337023 system_pods.go:89] "kube-scheduler-old-k8s-version-087235" [1da33e9b-b3a7-4e2b-b951-81bfc76bb515] Running
	I1115 10:34:13.582102  337023 system_pods.go:89] "storage-provisioner" [f2e47bd9-5a00-47cd-9b2e-5b80244c04a1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:34:13.582117  337023 retry.go:31] will retry after 274.149323ms: missing components: kube-dns
	I1115 10:34:13.860410  337023 system_pods.go:86] 8 kube-system pods found
	I1115 10:34:13.860448  337023 system_pods.go:89] "coredns-5dd5756b68-bdpfv" [f9b5c9c2-d642-4a22-890d-89a8f91f771b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:34:13.860457  337023 system_pods.go:89] "etcd-old-k8s-version-087235" [ad6ea576-0a1a-4a8a-96c2-3076229520e9] Running
	I1115 10:34:13.860464  337023 system_pods.go:89] "kindnet-7btvm" [40ac7700-b07d-4504-8532-414d2fab7395] Running
	I1115 10:34:13.860469  337023 system_pods.go:89] "kube-apiserver-old-k8s-version-087235" [d035a8eb-e4b4-4eae-8726-c29fa6059168] Running
	I1115 10:34:13.860475  337023 system_pods.go:89] "kube-controller-manager-old-k8s-version-087235" [9cc04644-cddd-4b9c-964c-826ee56dbc47] Running
	I1115 10:34:13.860480  337023 system_pods.go:89] "kube-proxy-gl22j" [a854c189-3bd6-4c7d-8160-ae11b35db003] Running
	I1115 10:34:13.860486  337023 system_pods.go:89] "kube-scheduler-old-k8s-version-087235" [1da33e9b-b3a7-4e2b-b951-81bfc76bb515] Running
	I1115 10:34:13.860493  337023 system_pods.go:89] "storage-provisioner" [f2e47bd9-5a00-47cd-9b2e-5b80244c04a1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:34:13.860513  337023 retry.go:31] will retry after 471.080391ms: missing components: kube-dns
	I1115 10:34:14.336325  337023 system_pods.go:86] 8 kube-system pods found
	I1115 10:34:14.336360  337023 system_pods.go:89] "coredns-5dd5756b68-bdpfv" [f9b5c9c2-d642-4a22-890d-89a8f91f771b] Running
	I1115 10:34:14.336369  337023 system_pods.go:89] "etcd-old-k8s-version-087235" [ad6ea576-0a1a-4a8a-96c2-3076229520e9] Running
	I1115 10:34:14.336374  337023 system_pods.go:89] "kindnet-7btvm" [40ac7700-b07d-4504-8532-414d2fab7395] Running
	I1115 10:34:14.336383  337023 system_pods.go:89] "kube-apiserver-old-k8s-version-087235" [d035a8eb-e4b4-4eae-8726-c29fa6059168] Running
	I1115 10:34:14.336389  337023 system_pods.go:89] "kube-controller-manager-old-k8s-version-087235" [9cc04644-cddd-4b9c-964c-826ee56dbc47] Running
	I1115 10:34:14.336396  337023 system_pods.go:89] "kube-proxy-gl22j" [a854c189-3bd6-4c7d-8160-ae11b35db003] Running
	I1115 10:34:14.336401  337023 system_pods.go:89] "kube-scheduler-old-k8s-version-087235" [1da33e9b-b3a7-4e2b-b951-81bfc76bb515] Running
	I1115 10:34:14.336406  337023 system_pods.go:89] "storage-provisioner" [f2e47bd9-5a00-47cd-9b2e-5b80244c04a1] Running
	I1115 10:34:14.336420  337023 system_pods.go:126] duration metric: took 1.069473333s to wait for k8s-apps to be running ...
	I1115 10:34:14.336433  337023 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:34:14.336491  337023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:34:14.352799  337023 system_svc.go:56] duration metric: took 16.353825ms WaitForService to wait for kubelet
	I1115 10:34:14.352837  337023 kubeadm.go:587] duration metric: took 16.59743319s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:34:14.352862  337023 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:34:14.356252  337023 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:34:14.356281  337023 node_conditions.go:123] node cpu capacity is 8
	I1115 10:34:14.356295  337023 node_conditions.go:105] duration metric: took 3.427337ms to run NodePressure ...
	I1115 10:34:14.356307  337023 start.go:242] waiting for startup goroutines ...
	I1115 10:34:14.356316  337023 start.go:247] waiting for cluster config update ...
	I1115 10:34:14.356330  337023 start.go:256] writing updated cluster config ...
	I1115 10:34:14.356585  337023 ssh_runner.go:195] Run: rm -f paused
	I1115 10:34:14.360788  337023 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:34:14.365480  337023 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-bdpfv" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:14.369803  337023 pod_ready.go:94] pod "coredns-5dd5756b68-bdpfv" is "Ready"
	I1115 10:34:14.369823  337023 pod_ready.go:86] duration metric: took 4.322204ms for pod "coredns-5dd5756b68-bdpfv" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:14.372340  337023 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:14.376502  337023 pod_ready.go:94] pod "etcd-old-k8s-version-087235" is "Ready"
	I1115 10:34:14.376524  337023 pod_ready.go:86] duration metric: took 4.163941ms for pod "etcd-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:14.379197  337023 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:14.383053  337023 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-087235" is "Ready"
	I1115 10:34:14.383078  337023 pod_ready.go:86] duration metric: took 3.860879ms for pod "kube-apiserver-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:14.385489  337023 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:14.765782  337023 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-087235" is "Ready"
	I1115 10:34:14.765811  337023 pod_ready.go:86] duration metric: took 380.299943ms for pod "kube-controller-manager-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:14.966487  337023 pod_ready.go:83] waiting for pod "kube-proxy-gl22j" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:15.365418  337023 pod_ready.go:94] pod "kube-proxy-gl22j" is "Ready"
	I1115 10:34:15.365443  337023 pod_ready.go:86] duration metric: took 398.932273ms for pod "kube-proxy-gl22j" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:15.566908  337023 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:15.967460  337023 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-087235" is "Ready"
	I1115 10:34:15.967492  337023 pod_ready.go:86] duration metric: took 400.555012ms for pod "kube-scheduler-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:15.967506  337023 pod_ready.go:40] duration metric: took 1.606683444s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:34:16.020812  337023 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1115 10:34:16.023436  337023 out.go:203] 
	W1115 10:34:16.024932  337023 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1115 10:34:16.026402  337023 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1115 10:34:16.028915  337023 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-087235" cluster and "default" namespace by default
	I1115 10:34:14.874202  344850 addons.go:515] duration metric: took 1.129979184s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1115 10:34:15.125661  344850 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-283677" context rescaled to 1 replicas
	W1115 10:34:16.874943  344850 node_ready.go:57] node "no-preload-283677" has "Ready":"False" status (will retry)
	W1115 10:34:15.150285  334883 pod_ready.go:104] pod "coredns-66bc5c9577-xzqds" is not "Ready", error: <nil>
	W1115 10:34:17.150937  334883 pod_ready.go:104] pod "coredns-66bc5c9577-xzqds" is not "Ready", error: <nil>
	I1115 10:34:19.399392  334883 pod_ready.go:94] pod "coredns-66bc5c9577-xzqds" is "Ready"
	I1115 10:34:19.399427  334883 pod_ready.go:86] duration metric: took 32.754986973s for pod "coredns-66bc5c9577-xzqds" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:19.402441  334883 pod_ready.go:83] waiting for pod "etcd-bridge-931243" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:19.406428  334883 pod_ready.go:94] pod "etcd-bridge-931243" is "Ready"
	I1115 10:34:19.406450  334883 pod_ready.go:86] duration metric: took 3.983641ms for pod "etcd-bridge-931243" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:19.408321  334883 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-931243" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:19.411841  334883 pod_ready.go:94] pod "kube-apiserver-bridge-931243" is "Ready"
	I1115 10:34:19.411865  334883 pod_ready.go:86] duration metric: took 3.51618ms for pod "kube-apiserver-bridge-931243" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:19.413704  334883 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-931243" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:19.417184  334883 pod_ready.go:94] pod "kube-controller-manager-bridge-931243" is "Ready"
	I1115 10:34:19.417204  334883 pod_ready.go:86] duration metric: took 3.479687ms for pod "kube-controller-manager-bridge-931243" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:19.548348  334883 pod_ready.go:83] waiting for pod "kube-proxy-66f22" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:19.948363  334883 pod_ready.go:94] pod "kube-proxy-66f22" is "Ready"
	I1115 10:34:19.948400  334883 pod_ready.go:86] duration metric: took 400.02426ms for pod "kube-proxy-66f22" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:20.148717  334883 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-931243" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:20.550114  334883 pod_ready.go:94] pod "kube-scheduler-bridge-931243" is "Ready"
	I1115 10:34:20.550148  334883 pod_ready.go:86] duration metric: took 401.403123ms for pod "kube-scheduler-bridge-931243" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:20.550166  334883 pod_ready.go:40] duration metric: took 33.908914915s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:34:20.600440  334883 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:34:20.602230  334883 out.go:179] * Done! kubectl is now configured to use "bridge-931243" cluster and "default" namespace by default
	W1115 10:34:19.375113  344850 node_ready.go:57] node "no-preload-283677" has "Ready":"False" status (will retry)
	W1115 10:34:21.875807  344850 node_ready.go:57] node "no-preload-283677" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 15 10:34:13 old-k8s-version-087235 crio[891]: time="2025-11-15T10:34:13.887099124Z" level=info msg="Created container 80140f8615ea80c986d60d8fb8c9a38e8929d31d987768d2260dc605ce329ac5: kube-system/coredns-5dd5756b68-bdpfv/coredns" id=3752a7ba-d614-4e3c-8c28-3e7dd2635e80 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:34:13 old-k8s-version-087235 crio[891]: time="2025-11-15T10:34:13.887506756Z" level=info msg="Starting container: 80140f8615ea80c986d60d8fb8c9a38e8929d31d987768d2260dc605ce329ac5" id=fce7b469-0c96-47ef-a300-ff47d2a563f7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:34:13 old-k8s-version-087235 crio[891]: time="2025-11-15T10:34:13.889348902Z" level=info msg="Started container" PID=2266 containerID=80140f8615ea80c986d60d8fb8c9a38e8929d31d987768d2260dc605ce329ac5 description=kube-system/coredns-5dd5756b68-bdpfv/coredns id=fce7b469-0c96-47ef-a300-ff47d2a563f7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d8cea75e3258833d929f9316af2b0552427460de7a72f9e76837d6b95d238653
	Nov 15 10:34:16 old-k8s-version-087235 crio[891]: time="2025-11-15T10:34:16.492005452Z" level=info msg="Running pod sandbox: default/busybox/POD" id=50c29008-f419-44a6-8108-730e9d294fdf name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:34:16 old-k8s-version-087235 crio[891]: time="2025-11-15T10:34:16.492112698Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:34:16 old-k8s-version-087235 crio[891]: time="2025-11-15T10:34:16.499866928Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:437d8c9aec1d1fd6c882b2d61d30834cbc61a33d386ab24213ae493f22325814 UID:99afc046-339f-4b7b-a19f-e6b0a2bbf831 NetNS:/var/run/netns/170a9c4e-a724-427a-a395-8eec52b4a669 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0006f6738}] Aliases:map[]}"
	Nov 15 10:34:16 old-k8s-version-087235 crio[891]: time="2025-11-15T10:34:16.499904925Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 15 10:34:16 old-k8s-version-087235 crio[891]: time="2025-11-15T10:34:16.51066839Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:437d8c9aec1d1fd6c882b2d61d30834cbc61a33d386ab24213ae493f22325814 UID:99afc046-339f-4b7b-a19f-e6b0a2bbf831 NetNS:/var/run/netns/170a9c4e-a724-427a-a395-8eec52b4a669 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0006f6738}] Aliases:map[]}"
	Nov 15 10:34:16 old-k8s-version-087235 crio[891]: time="2025-11-15T10:34:16.510838944Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 15 10:34:16 old-k8s-version-087235 crio[891]: time="2025-11-15T10:34:16.513990215Z" level=info msg="Ran pod sandbox 437d8c9aec1d1fd6c882b2d61d30834cbc61a33d386ab24213ae493f22325814 with infra container: default/busybox/POD" id=50c29008-f419-44a6-8108-730e9d294fdf name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:34:16 old-k8s-version-087235 crio[891]: time="2025-11-15T10:34:16.515325742Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=25d99e40-c3f3-40c4-8ec2-64beb1dc9405 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:34:16 old-k8s-version-087235 crio[891]: time="2025-11-15T10:34:16.515480443Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=25d99e40-c3f3-40c4-8ec2-64beb1dc9405 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:34:16 old-k8s-version-087235 crio[891]: time="2025-11-15T10:34:16.515527468Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=25d99e40-c3f3-40c4-8ec2-64beb1dc9405 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:34:16 old-k8s-version-087235 crio[891]: time="2025-11-15T10:34:16.516109405Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=37be837e-23cf-4bb2-ab87-b765ac778219 name=/runtime.v1.ImageService/PullImage
	Nov 15 10:34:16 old-k8s-version-087235 crio[891]: time="2025-11-15T10:34:16.517605729Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 15 10:34:21 old-k8s-version-087235 crio[891]: time="2025-11-15T10:34:21.13858162Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=37be837e-23cf-4bb2-ab87-b765ac778219 name=/runtime.v1.ImageService/PullImage
	Nov 15 10:34:21 old-k8s-version-087235 crio[891]: time="2025-11-15T10:34:21.139583846Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6100824a-8105-4984-9dff-a09a24ebf156 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:34:21 old-k8s-version-087235 crio[891]: time="2025-11-15T10:34:21.141185034Z" level=info msg="Creating container: default/busybox/busybox" id=fede0500-c4ea-4047-8998-fc3435af0554 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:34:21 old-k8s-version-087235 crio[891]: time="2025-11-15T10:34:21.141327232Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:34:21 old-k8s-version-087235 crio[891]: time="2025-11-15T10:34:21.146163755Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:34:21 old-k8s-version-087235 crio[891]: time="2025-11-15T10:34:21.146752395Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:34:21 old-k8s-version-087235 crio[891]: time="2025-11-15T10:34:21.174318511Z" level=info msg="Created container 182d1209ef0ee44cdd62dcc4252557a41589572a0f4ba8ac3939e0a2e473f008: default/busybox/busybox" id=fede0500-c4ea-4047-8998-fc3435af0554 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:34:21 old-k8s-version-087235 crio[891]: time="2025-11-15T10:34:21.176099949Z" level=info msg="Starting container: 182d1209ef0ee44cdd62dcc4252557a41589572a0f4ba8ac3939e0a2e473f008" id=68fe3f9f-cb72-4ffa-8587-8d2fe51f9370 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:34:21 old-k8s-version-087235 crio[891]: time="2025-11-15T10:34:21.178474274Z" level=info msg="Started container" PID=2341 containerID=182d1209ef0ee44cdd62dcc4252557a41589572a0f4ba8ac3939e0a2e473f008 description=default/busybox/busybox id=68fe3f9f-cb72-4ffa-8587-8d2fe51f9370 name=/runtime.v1.RuntimeService/StartContainer sandboxID=437d8c9aec1d1fd6c882b2d61d30834cbc61a33d386ab24213ae493f22325814
	Nov 15 10:34:27 old-k8s-version-087235 crio[891]: time="2025-11-15T10:34:27.279690475Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	182d1209ef0ee       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   437d8c9aec1d1       busybox                                          default
	80140f8615ea8       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      14 seconds ago      Running             coredns                   0                   d8cea75e32588       coredns-5dd5756b68-bdpfv                         kube-system
	f3ebd840df618       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      14 seconds ago      Running             storage-provisioner       0                   03b0ed353ec0b       storage-provisioner                              kube-system
	37511b2b10688       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    26 seconds ago      Running             kindnet-cni               0                   79376e00cecfc       kindnet-7btvm                                    kube-system
	547b3486062ef       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      30 seconds ago      Running             kube-proxy                0                   0edf1aa03eb23       kube-proxy-gl22j                                 kube-system
	cd1348dd2cd5d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      49 seconds ago      Running             etcd                      0                   cc141b27c866e       etcd-old-k8s-version-087235                      kube-system
	497dcc920b51a       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      49 seconds ago      Running             kube-apiserver            0                   97d81345ea340       kube-apiserver-old-k8s-version-087235            kube-system
	2102c587e5a3c       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      49 seconds ago      Running             kube-controller-manager   0                   cee3d29586543       kube-controller-manager-old-k8s-version-087235   kube-system
	4cfcbb43064f4       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      49 seconds ago      Running             kube-scheduler            0                   ee62df0b3dfd9       kube-scheduler-old-k8s-version-087235            kube-system
	
	
	==> coredns [80140f8615ea80c986d60d8fb8c9a38e8929d31d987768d2260dc605ce329ac5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55388 - 57217 "HINFO IN 5283269690358568296.4566678544437899000. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.013772544s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-087235
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-087235
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=old-k8s-version-087235
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_33_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:33:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-087235
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:34:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:34:15 +0000   Sat, 15 Nov 2025 10:33:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:34:15 +0000   Sat, 15 Nov 2025 10:33:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:34:15 +0000   Sat, 15 Nov 2025 10:33:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:34:15 +0000   Sat, 15 Nov 2025 10:34:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-087235
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                fdfc6964-6bf8-45b6-8dd6-3b0bdf50e4d6
	  Boot ID:                    6a33f504-3a8c-4d49-8fc5-301cf5275efc
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-5dd5756b68-bdpfv                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     30s
	  kube-system                 etcd-old-k8s-version-087235                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         44s
	  kube-system                 kindnet-7btvm                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-old-k8s-version-087235             250m (3%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-old-k8s-version-087235    200m (2%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-proxy-gl22j                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-old-k8s-version-087235             100m (1%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 30s                kube-proxy       
	  Normal  NodeHasSufficientMemory  50s (x8 over 50s)  kubelet          Node old-k8s-version-087235 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    50s (x8 over 50s)  kubelet          Node old-k8s-version-087235 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     50s (x8 over 50s)  kubelet          Node old-k8s-version-087235 status is now: NodeHasSufficientPID
	  Normal  Starting                 44s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  44s                kubelet          Node old-k8s-version-087235 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s                kubelet          Node old-k8s-version-087235 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s                kubelet          Node old-k8s-version-087235 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s                node-controller  Node old-k8s-version-087235 event: Registered Node old-k8s-version-087235 in Controller
	  Normal  NodeReady                15s                kubelet          Node old-k8s-version-087235 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000027] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[ +32.253211] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: a2 eb 56 8a 52 fc 0e 1b 58 2c a0 18 08 00
	[Nov15 10:32] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 7f 53 f9 15 93 08 06
	[  +0.017821] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c6 66 25 e9 45 28 08 06
	[ +19.178294] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 a3 65 b9 28 15 08 06
	[  +0.347929] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 6d df 33 a5 ad 08 06
	[  +0.000319] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 7f 53 f9 15 93 08 06
	[Nov15 10:33] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 96 ed 10 e8 9a 80 08 06
	[  +0.000481] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 a3 65 b9 28 15 08 06
	[ +35.214217] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 c2 9a 56 52 0c 08 06
	[  +9.104720] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 c2 c4 d1 7c dd 08 06
	[Nov15 10:34] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 68 1e 80 53 05 08 06
	[  +0.000444] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 c2 c4 d1 7c dd 08 06
	
	
	==> etcd [cd1348dd2cd5d34fb926d711ea60f7c889749202db5dc4614f327011b54038a3] <==
	{"level":"info","ts":"2025-11-15T10:33:39.375164Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-11-15T10:33:39.37522Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-11-15T10:33:39.375223Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-15T10:33:39.375253Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-15T10:33:39.957023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-15T10:33:39.957064Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-15T10:33:39.957079Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2025-11-15T10:33:39.957091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2025-11-15T10:33:39.957096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-11-15T10:33:39.957104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-11-15T10:33:39.957111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-11-15T10:33:39.957816Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-15T10:33:39.95831Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-087235 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-15T10:33:39.958308Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-15T10:33:39.958337Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-15T10:33:39.958757Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-15T10:33:39.958862Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-15T10:33:39.9596Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-15T10:33:39.960517Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-15T10:33:39.960614Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-15T10:33:39.961222Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-11-15T10:33:39.961842Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-15T10:33:42.642966Z","caller":"traceutil/trace.go:171","msg":"trace[554095755] transaction","detail":"{read_only:false; response_revision:61; number_of_response:1; }","duration":"152.226285ms","start":"2025-11-15T10:33:42.490706Z","end":"2025-11-15T10:33:42.642932Z","steps":["trace[554095755] 'process raft request'  (duration: 152.099704ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T10:33:42.643415Z","caller":"traceutil/trace.go:171","msg":"trace[1452350486] transaction","detail":"{read_only:false; response_revision:60; number_of_response:1; }","duration":"152.512532ms","start":"2025-11-15T10:33:42.490326Z","end":"2025-11-15T10:33:42.642838Z","steps":["trace[1452350486] 'process raft request'  (duration: 129.824727ms)","trace[1452350486] 'compare'  (duration: 22.492802ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T10:33:42.812295Z","caller":"traceutil/trace.go:171","msg":"trace[1548384559] transaction","detail":"{read_only:false; response_revision:66; number_of_response:1; }","duration":"146.446135ms","start":"2025-11-15T10:33:42.665822Z","end":"2025-11-15T10:33:42.812268Z","steps":["trace[1548384559] 'process raft request'  (duration: 143.616675ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:34:28 up  2:16,  0 user,  load average: 6.20, 4.64, 2.67
	Linux old-k8s-version-087235 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [37511b2b10688ecbe423ec6daba9cf92aa14e0bce28e787e3797555477af5d82] <==
	I1115 10:34:02.398479       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:34:02.398697       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1115 10:34:02.398844       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:34:02.398864       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:34:02.398886       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:34:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:34:02.693011       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:34:02.693117       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:34:02.693132       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:34:02.693565       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 10:34:02.993737       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:34:02.993771       1 metrics.go:72] Registering metrics
	I1115 10:34:02.993877       1 controller.go:711] "Syncing nftables rules"
	I1115 10:34:12.693694       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1115 10:34:12.693758       1 main.go:301] handling current node
	I1115 10:34:22.693665       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1115 10:34:22.693706       1 main.go:301] handling current node
	
	
	==> kube-apiserver [497dcc920b51a003adc572d4e3a410345d47d03aee919838ca67220da653d54a] <==
	I1115 10:33:41.939820       1 aggregator.go:166] initial CRD sync complete...
	I1115 10:33:41.939869       1 autoregister_controller.go:141] Starting autoregister controller
	I1115 10:33:41.939907       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 10:33:41.939933       1 cache.go:39] Caches are synced for autoregister controller
	I1115 10:33:41.939997       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1115 10:33:41.957216       1 controller.go:624] quota admission added evaluator for: namespaces
	I1115 10:33:41.967554       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E1115 10:33:42.048851       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E1115 10:33:42.049129       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I1115 10:33:42.318857       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:33:42.816610       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1115 10:33:42.825298       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1115 10:33:42.825325       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:33:43.282497       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:33:43.320454       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:33:43.459442       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1115 10:33:43.467173       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1115 10:33:43.468645       1 controller.go:624] quota admission added evaluator for: endpoints
	I1115 10:33:43.472946       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:33:43.874083       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1115 10:33:44.804628       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1115 10:33:44.814040       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1115 10:33:44.823064       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1115 10:33:57.838588       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1115 10:33:58.047182       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [2102c587e5a3c2811c4c181745d331825e0944258d8a2db4bc180c26c77259a7] <==
	I1115 10:33:57.345302       1 shared_informer.go:318] Caches are synced for resource quota
	I1115 10:33:57.661224       1 shared_informer.go:318] Caches are synced for garbage collector
	I1115 10:33:57.690501       1 shared_informer.go:318] Caches are synced for garbage collector
	I1115 10:33:57.690534       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1115 10:33:57.852822       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1115 10:33:58.059292       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7btvm"
	I1115 10:33:58.061530       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-gl22j"
	I1115 10:33:58.243569       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-m947x"
	I1115 10:33:58.253469       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-bdpfv"
	I1115 10:33:58.263137       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="412.467803ms"
	I1115 10:33:58.342127       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.922296ms"
	I1115 10:33:58.358312       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.119765ms"
	I1115 10:33:58.358486       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="109.408µs"
	I1115 10:33:58.983661       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1115 10:33:59.038742       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-m947x"
	I1115 10:33:59.048806       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="65.99333ms"
	I1115 10:33:59.059370       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.504232ms"
	I1115 10:33:59.059508       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="95.992µs"
	I1115 10:33:59.059567       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="35.697µs"
	I1115 10:34:13.229159       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="109.435µs"
	I1115 10:34:13.241089       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="163.17µs"
	I1115 10:34:14.091683       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="128.102µs"
	I1115 10:34:14.108076       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.833736ms"
	I1115 10:34:14.108305       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="135.532µs"
	I1115 10:34:17.202414       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [547b3486062ef4c2268d1d0ddc54a5f2a6f86c4ffcb5d796b43924073758d6b6] <==
	I1115 10:33:58.740099       1 server_others.go:69] "Using iptables proxy"
	I1115 10:33:58.755370       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1115 10:33:58.852157       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:33:58.855671       1 server_others.go:152] "Using iptables Proxier"
	I1115 10:33:58.855780       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1115 10:33:58.855815       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1115 10:33:58.855880       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1115 10:33:58.856687       1 server.go:846] "Version info" version="v1.28.0"
	I1115 10:33:58.856751       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:33:58.857849       1 config.go:188] "Starting service config controller"
	I1115 10:33:58.859303       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1115 10:33:58.858630       1 config.go:97] "Starting endpoint slice config controller"
	I1115 10:33:58.859344       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1115 10:33:58.859200       1 config.go:315] "Starting node config controller"
	I1115 10:33:58.859354       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1115 10:33:58.959506       1 shared_informer.go:318] Caches are synced for service config
	I1115 10:33:58.959539       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1115 10:33:58.959486       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [4cfcbb43064f4a20fc4c0c0e7d45f463586f411a00b4c05f9fd3d534d5536d52] <==
	W1115 10:33:42.054420       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1115 10:33:42.054451       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1115 10:33:42.054567       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1115 10:33:42.054598       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1115 10:33:42.054676       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1115 10:33:42.054694       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1115 10:33:42.054775       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1115 10:33:42.054793       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1115 10:33:42.055166       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1115 10:33:42.055226       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1115 10:33:42.055626       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1115 10:33:42.055676       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1115 10:33:42.056259       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1115 10:33:42.058408       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1115 10:33:42.783001       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1115 10:33:42.783032       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1115 10:33:42.953859       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1115 10:33:42.954012       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1115 10:33:43.095978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1115 10:33:43.096019       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1115 10:33:43.116595       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1115 10:33:43.116634       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1115 10:33:43.273473       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1115 10:33:43.273515       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1115 10:33:45.542307       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 15 10:33:58 old-k8s-version-087235 kubelet[1514]: I1115 10:33:58.157962    1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz87r\" (UniqueName: \"kubernetes.io/projected/a854c189-3bd6-4c7d-8160-ae11b35db003-kube-api-access-sz87r\") pod \"kube-proxy-gl22j\" (UID: \"a854c189-3bd6-4c7d-8160-ae11b35db003\") " pod="kube-system/kube-proxy-gl22j"
	Nov 15 10:33:58 old-k8s-version-087235 kubelet[1514]: I1115 10:33:58.158034    1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40ac7700-b07d-4504-8532-414d2fab7395-xtables-lock\") pod \"kindnet-7btvm\" (UID: \"40ac7700-b07d-4504-8532-414d2fab7395\") " pod="kube-system/kindnet-7btvm"
	Nov 15 10:33:58 old-k8s-version-087235 kubelet[1514]: I1115 10:33:58.158072    1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a854c189-3bd6-4c7d-8160-ae11b35db003-kube-proxy\") pod \"kube-proxy-gl22j\" (UID: \"a854c189-3bd6-4c7d-8160-ae11b35db003\") " pod="kube-system/kube-proxy-gl22j"
	Nov 15 10:33:58 old-k8s-version-087235 kubelet[1514]: I1115 10:33:58.158111    1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a854c189-3bd6-4c7d-8160-ae11b35db003-xtables-lock\") pod \"kube-proxy-gl22j\" (UID: \"a854c189-3bd6-4c7d-8160-ae11b35db003\") " pod="kube-system/kube-proxy-gl22j"
	Nov 15 10:33:58 old-k8s-version-087235 kubelet[1514]: I1115 10:33:58.158144    1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdxjd\" (UniqueName: \"kubernetes.io/projected/40ac7700-b07d-4504-8532-414d2fab7395-kube-api-access-cdxjd\") pod \"kindnet-7btvm\" (UID: \"40ac7700-b07d-4504-8532-414d2fab7395\") " pod="kube-system/kindnet-7btvm"
	Nov 15 10:33:58 old-k8s-version-087235 kubelet[1514]: I1115 10:33:58.158180    1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a854c189-3bd6-4c7d-8160-ae11b35db003-lib-modules\") pod \"kube-proxy-gl22j\" (UID: \"a854c189-3bd6-4c7d-8160-ae11b35db003\") " pod="kube-system/kube-proxy-gl22j"
	Nov 15 10:33:58 old-k8s-version-087235 kubelet[1514]: I1115 10:33:58.158211    1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/40ac7700-b07d-4504-8532-414d2fab7395-cni-cfg\") pod \"kindnet-7btvm\" (UID: \"40ac7700-b07d-4504-8532-414d2fab7395\") " pod="kube-system/kindnet-7btvm"
	Nov 15 10:33:58 old-k8s-version-087235 kubelet[1514]: I1115 10:33:58.158239    1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40ac7700-b07d-4504-8532-414d2fab7395-lib-modules\") pod \"kindnet-7btvm\" (UID: \"40ac7700-b07d-4504-8532-414d2fab7395\") " pod="kube-system/kindnet-7btvm"
	Nov 15 10:33:58 old-k8s-version-087235 kubelet[1514]: W1115 10:33:58.458302    1514 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/3d4715b4872d2f4603f13f0191a01c4322a26432e1316c2ae62b1c571bea1814/crio-79376e00cecfc4cf56981a622fd4df6fec8cbe3ed0be1612949242490bba1477 WatchSource:0}: Error finding container 79376e00cecfc4cf56981a622fd4df6fec8cbe3ed0be1612949242490bba1477: Status 404 returned error can't find the container with id 79376e00cecfc4cf56981a622fd4df6fec8cbe3ed0be1612949242490bba1477
	Nov 15 10:34:03 old-k8s-version-087235 kubelet[1514]: I1115 10:34:03.070649    1514 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-gl22j" podStartSLOduration=5.070583408 podCreationTimestamp="2025-11-15 10:33:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:33:59.061896266 +0000 UTC m=+14.285579916" watchObservedRunningTime="2025-11-15 10:34:03.070583408 +0000 UTC m=+18.294267058"
	Nov 15 10:34:03 old-k8s-version-087235 kubelet[1514]: I1115 10:34:03.070942    1514 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-7btvm" podStartSLOduration=1.3348952459999999 podCreationTimestamp="2025-11-15 10:33:58 +0000 UTC" firstStartedPulling="2025-11-15 10:33:58.461992213 +0000 UTC m=+13.685675856" lastFinishedPulling="2025-11-15 10:34:02.198006299 +0000 UTC m=+17.421689932" observedRunningTime="2025-11-15 10:34:03.070204008 +0000 UTC m=+18.293887658" watchObservedRunningTime="2025-11-15 10:34:03.070909322 +0000 UTC m=+18.294592972"
	Nov 15 10:34:13 old-k8s-version-087235 kubelet[1514]: I1115 10:34:13.206642    1514 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 15 10:34:13 old-k8s-version-087235 kubelet[1514]: I1115 10:34:13.229379    1514 topology_manager.go:215] "Topology Admit Handler" podUID="f9b5c9c2-d642-4a22-890d-89a8f91f771b" podNamespace="kube-system" podName="coredns-5dd5756b68-bdpfv"
	Nov 15 10:34:13 old-k8s-version-087235 kubelet[1514]: I1115 10:34:13.231530    1514 topology_manager.go:215] "Topology Admit Handler" podUID="f2e47bd9-5a00-47cd-9b2e-5b80244c04a1" podNamespace="kube-system" podName="storage-provisioner"
	Nov 15 10:34:13 old-k8s-version-087235 kubelet[1514]: I1115 10:34:13.420742    1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6b2c\" (UniqueName: \"kubernetes.io/projected/f9b5c9c2-d642-4a22-890d-89a8f91f771b-kube-api-access-t6b2c\") pod \"coredns-5dd5756b68-bdpfv\" (UID: \"f9b5c9c2-d642-4a22-890d-89a8f91f771b\") " pod="kube-system/coredns-5dd5756b68-bdpfv"
	Nov 15 10:34:13 old-k8s-version-087235 kubelet[1514]: I1115 10:34:13.420808    1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f2e47bd9-5a00-47cd-9b2e-5b80244c04a1-tmp\") pod \"storage-provisioner\" (UID: \"f2e47bd9-5a00-47cd-9b2e-5b80244c04a1\") " pod="kube-system/storage-provisioner"
	Nov 15 10:34:13 old-k8s-version-087235 kubelet[1514]: I1115 10:34:13.420835    1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwwq5\" (UniqueName: \"kubernetes.io/projected/f2e47bd9-5a00-47cd-9b2e-5b80244c04a1-kube-api-access-pwwq5\") pod \"storage-provisioner\" (UID: \"f2e47bd9-5a00-47cd-9b2e-5b80244c04a1\") " pod="kube-system/storage-provisioner"
	Nov 15 10:34:13 old-k8s-version-087235 kubelet[1514]: I1115 10:34:13.420947    1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9b5c9c2-d642-4a22-890d-89a8f91f771b-config-volume\") pod \"coredns-5dd5756b68-bdpfv\" (UID: \"f9b5c9c2-d642-4a22-890d-89a8f91f771b\") " pod="kube-system/coredns-5dd5756b68-bdpfv"
	Nov 15 10:34:13 old-k8s-version-087235 kubelet[1514]: W1115 10:34:13.848929    1514 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/3d4715b4872d2f4603f13f0191a01c4322a26432e1316c2ae62b1c571bea1814/crio-03b0ed353ec0b0f226e14f86b1c883d5074ccc6e2dfe17831550ea274984f656 WatchSource:0}: Error finding container 03b0ed353ec0b0f226e14f86b1c883d5074ccc6e2dfe17831550ea274984f656: Status 404 returned error can't find the container with id 03b0ed353ec0b0f226e14f86b1c883d5074ccc6e2dfe17831550ea274984f656
	Nov 15 10:34:13 old-k8s-version-087235 kubelet[1514]: W1115 10:34:13.861415    1514 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/3d4715b4872d2f4603f13f0191a01c4322a26432e1316c2ae62b1c571bea1814/crio-d8cea75e3258833d929f9316af2b0552427460de7a72f9e76837d6b95d238653 WatchSource:0}: Error finding container d8cea75e3258833d929f9316af2b0552427460de7a72f9e76837d6b95d238653: Status 404 returned error can't find the container with id d8cea75e3258833d929f9316af2b0552427460de7a72f9e76837d6b95d238653
	Nov 15 10:34:14 old-k8s-version-087235 kubelet[1514]: I1115 10:34:14.091521    1514 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-bdpfv" podStartSLOduration=16.091473124 podCreationTimestamp="2025-11-15 10:33:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:34:14.091301012 +0000 UTC m=+29.314984663" watchObservedRunningTime="2025-11-15 10:34:14.091473124 +0000 UTC m=+29.315156776"
	Nov 15 10:34:14 old-k8s-version-087235 kubelet[1514]: I1115 10:34:14.110068    1514 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.110020709 podCreationTimestamp="2025-11-15 10:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:34:14.109819227 +0000 UTC m=+29.333502880" watchObservedRunningTime="2025-11-15 10:34:14.110020709 +0000 UTC m=+29.333704405"
	Nov 15 10:34:16 old-k8s-version-087235 kubelet[1514]: I1115 10:34:16.190358    1514 topology_manager.go:215] "Topology Admit Handler" podUID="99afc046-339f-4b7b-a19f-e6b0a2bbf831" podNamespace="default" podName="busybox"
	Nov 15 10:34:16 old-k8s-version-087235 kubelet[1514]: I1115 10:34:16.239258    1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr58w\" (UniqueName: \"kubernetes.io/projected/99afc046-339f-4b7b-a19f-e6b0a2bbf831-kube-api-access-rr58w\") pod \"busybox\" (UID: \"99afc046-339f-4b7b-a19f-e6b0a2bbf831\") " pod="default/busybox"
	Nov 15 10:34:16 old-k8s-version-087235 kubelet[1514]: W1115 10:34:16.513092    1514 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/3d4715b4872d2f4603f13f0191a01c4322a26432e1316c2ae62b1c571bea1814/crio-437d8c9aec1d1fd6c882b2d61d30834cbc61a33d386ab24213ae493f22325814 WatchSource:0}: Error finding container 437d8c9aec1d1fd6c882b2d61d30834cbc61a33d386ab24213ae493f22325814: Status 404 returned error can't find the container with id 437d8c9aec1d1fd6c882b2d61d30834cbc61a33d386ab24213ae493f22325814
	
	
	==> storage-provisioner [f3ebd840df618fdb6cb7fa2b9b223c78594af865466d15c86e29bedbf891823d] <==
	I1115 10:34:13.893523       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 10:34:13.937013       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:34:13.937363       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1115 10:34:13.952180       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:34:13.952327       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-087235_cdf3efb7-f6b2-4a4a-b64d-116d867d39df!
	I1115 10:34:13.952264       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c51d799d-ecee-4db4-97cb-68755d563c6e", APIVersion:"v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-087235_cdf3efb7-f6b2-4a4a-b64d-116d867d39df became leader
	I1115 10:34:14.052936       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-087235_cdf3efb7-f6b2-4a4a-b64d-116d867d39df!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-087235 -n old-k8s-version-087235
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-087235 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.44s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-283677 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-283677 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (301.173491ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:34:41Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-283677 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
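For context on the exit status 11 above: the MK_ADDON_ENABLE_PAUSED message shows minikube's paused-state check shelling out to "sudo runc list -f json", which fails on this crio-based node because /run/runc does not exist. A rough way to reproduce the same check by hand, reusing the profile name from this run (illustrative sketch only), would be:
	out/minikube-linux-amd64 ssh -p no-preload-283677 -- sudo runc list -f json    # expected to fail: open /run/runc: no such file or directory
	out/minikube-linux-amd64 ssh -p no-preload-283677 -- sudo crictl ps            # crio's own view of the running containers, if crictl is present on the node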
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-283677 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-283677 describe deploy/metrics-server -n kube-system: exit status 1 (74.137183ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-283677 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
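The check at start_stop_delete_test.go:219 expects the metrics-server deployment image to contain "fake.domain/registry.k8s.io/echoserver:1.4"; the deployment info above is empty because the deployment was never created (NotFound). Had it existed, one minimal way to read the same field, reusing the kubectl context from this run (illustrative sketch only), would be:
	kubectl --context no-preload-283677 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'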
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-283677
helpers_test.go:243: (dbg) docker inspect no-preload-283677:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5be6667f097090011551b6e80bf165ef8b8393ba894fdd8185eba7e40ac44832",
	        "Created": "2025-11-15T10:33:34.248576658Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 345647,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:33:34.285344622Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/5be6667f097090011551b6e80bf165ef8b8393ba894fdd8185eba7e40ac44832/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5be6667f097090011551b6e80bf165ef8b8393ba894fdd8185eba7e40ac44832/hostname",
	        "HostsPath": "/var/lib/docker/containers/5be6667f097090011551b6e80bf165ef8b8393ba894fdd8185eba7e40ac44832/hosts",
	        "LogPath": "/var/lib/docker/containers/5be6667f097090011551b6e80bf165ef8b8393ba894fdd8185eba7e40ac44832/5be6667f097090011551b6e80bf165ef8b8393ba894fdd8185eba7e40ac44832-json.log",
	        "Name": "/no-preload-283677",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-283677:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-283677",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5be6667f097090011551b6e80bf165ef8b8393ba894fdd8185eba7e40ac44832",
	                "LowerDir": "/var/lib/docker/overlay2/45ca12b29ace88526ba1031080c77398d7e970e8d9cdc89fe714afcbb97fdb8d-init/diff:/var/lib/docker/overlay2/507c85b6fb43d9c98216fac79d7a8d08dd20b2a63d0fbfc758336e5d38c04044/diff",
	                "MergedDir": "/var/lib/docker/overlay2/45ca12b29ace88526ba1031080c77398d7e970e8d9cdc89fe714afcbb97fdb8d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/45ca12b29ace88526ba1031080c77398d7e970e8d9cdc89fe714afcbb97fdb8d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/45ca12b29ace88526ba1031080c77398d7e970e8d9cdc89fe714afcbb97fdb8d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-283677",
	                "Source": "/var/lib/docker/volumes/no-preload-283677/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-283677",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-283677",
	                "name.minikube.sigs.k8s.io": "no-preload-283677",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c7a5ca0a845fc1290ccadb0a5c9032b2649ce570f179ad0bb00aad4d0e71c343",
	            "SandboxKey": "/var/run/docker/netns/c7a5ca0a845f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-283677": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "31f43b80693175788eae574d1283c9772486f60a6f30b977a4f67f74c18220c7",
	                    "EndpointID": "51f4fb2e33b0d209f5367e7669264685a84d21167bfc8d5798cc2072b922dd2d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "02:e9:75:12:60:3c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-283677",
	                        "5be6667f0970"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-283677 -n no-preload-283677
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-283677 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-283677 logs -n 25: (1.215063366s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p flannel-931243 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                             │ flannel-931243         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p flannel-931243 sudo cat /etc/containerd/config.toml                                                                                                                                                                                        │ flannel-931243         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p flannel-931243 sudo containerd config dump                                                                                                                                                                                                 │ flannel-931243         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p flannel-931243 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                          │ flannel-931243         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ stop    │ -p old-k8s-version-087235 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-087235 │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p flannel-931243 sudo systemctl cat crio --no-pager                                                                                                                                                                                          │ flannel-931243         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p flannel-931243 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                │ flannel-931243         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p flannel-931243 sudo crio config                                                                                                                                                                                                            │ flannel-931243         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ delete  │ -p flannel-931243                                                                                                                                                                                                                             │ flannel-931243         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ start   │ -p embed-certs-719574 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-719574     │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ ssh     │ -p bridge-931243 sudo cat /etc/nsswitch.conf                                                                                                                                                                                                  │ bridge-931243          │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo cat /etc/hosts                                                                                                                                                                                                          │ bridge-931243          │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo cat /etc/resolv.conf                                                                                                                                                                                                    │ bridge-931243          │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo crictl pods                                                                                                                                                                                                             │ bridge-931243          │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo crictl ps --all                                                                                                                                                                                                         │ bridge-931243          │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                  │ bridge-931243          │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo ip a s                                                                                                                                                                                                                  │ bridge-931243          │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo ip r s                                                                                                                                                                                                                  │ bridge-931243          │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo iptables-save                                                                                                                                                                                                           │ bridge-931243          │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo iptables -t nat -L -n -v                                                                                                                                                                                                │ bridge-931243          │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ addons  │ enable metrics-server -p no-preload-283677 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-283677      │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ ssh     │ -p bridge-931243 sudo systemctl status kubelet --all --full --no-pager                                                                                                                                                                        │ bridge-931243          │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-087235 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-087235 │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo systemctl cat kubelet --no-pager                                                                                                                                                                                        │ bridge-931243          │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ start   │ -p old-k8s-version-087235 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-087235 │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:34:42
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:34:42.275850  361423 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:34:42.275979  361423 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:34:42.275987  361423 out.go:374] Setting ErrFile to fd 2...
	I1115 10:34:42.275994  361423 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:34:42.276220  361423 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 10:34:42.276722  361423 out.go:368] Setting JSON to false
	I1115 10:34:42.278336  361423 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8219,"bootTime":1763194663,"procs":290,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:34:42.278445  361423 start.go:143] virtualization: kvm guest
	I1115 10:34:42.280523  361423 out.go:179] * [old-k8s-version-087235] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:34:42.281854  361423 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 10:34:42.281853  361423 notify.go:221] Checking for updates...
	I1115 10:34:42.284179  361423 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:34:42.285334  361423 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:34:42.286543  361423 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube
	I1115 10:34:42.287673  361423 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:34:42.293407  361423 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:34:42.295393  361423 config.go:182] Loaded profile config "old-k8s-version-087235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 10:34:42.297337  361423 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1115 10:34:42.298800  361423 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:34:42.327895  361423 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:34:42.328067  361423 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:34:42.394656  361423 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:76 SystemTime:2025-11-15 10:34:42.383728145 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:34:42.394760  361423 docker.go:319] overlay module found
	I1115 10:34:42.396552  361423 out.go:179] * Using the docker driver based on existing profile
	
	
	==> CRI-O <==
	Nov 15 10:34:28 no-preload-283677 crio[886]: time="2025-11-15T10:34:28.813410518Z" level=info msg="Created container bdef39b34937483917a5c238fb82b1bddb76a6e0edabaeb0aa59e71c927529d4: kube-system/coredns-66bc5c9577-66nkj/coredns" id=4c2bafb9-5dfb-4e18-87bd-28e7ff01638b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:34:28 no-preload-283677 crio[886]: time="2025-11-15T10:34:28.814159805Z" level=info msg="Starting container: bdef39b34937483917a5c238fb82b1bddb76a6e0edabaeb0aa59e71c927529d4" id=e2c38878-e990-4713-8c61-c36dc24f7b21 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:34:28 no-preload-283677 crio[886]: time="2025-11-15T10:34:28.816228782Z" level=info msg="Started container" PID=3019 containerID=bdef39b34937483917a5c238fb82b1bddb76a6e0edabaeb0aa59e71c927529d4 description=kube-system/coredns-66bc5c9577-66nkj/coredns id=e2c38878-e990-4713-8c61-c36dc24f7b21 name=/runtime.v1.RuntimeService/StartContainer sandboxID=33e8dcf5869dfc183fa41bddcd16b5cbe480c42ce089a7415c89fa8847a05abe
	Nov 15 10:34:31 no-preload-283677 crio[886]: time="2025-11-15T10:34:31.794440742Z" level=info msg="Running pod sandbox: default/busybox/POD" id=857885d1-76e9-4def-abaf-965cbbd345e1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:34:31 no-preload-283677 crio[886]: time="2025-11-15T10:34:31.794587441Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:34:31 no-preload-283677 crio[886]: time="2025-11-15T10:34:31.799635075Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:22e49771482c72538b94707ce8eee16b142c95b11745c640be6f29e4b63273e5 UID:ddb2a962-6824-4e90-abdf-1404de5921dc NetNS:/var/run/netns/7a43e5ab-8393-433a-9c7f-3047822f5b29 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000540d28}] Aliases:map[]}"
	Nov 15 10:34:31 no-preload-283677 crio[886]: time="2025-11-15T10:34:31.799672502Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 15 10:34:31 no-preload-283677 crio[886]: time="2025-11-15T10:34:31.813169107Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:22e49771482c72538b94707ce8eee16b142c95b11745c640be6f29e4b63273e5 UID:ddb2a962-6824-4e90-abdf-1404de5921dc NetNS:/var/run/netns/7a43e5ab-8393-433a-9c7f-3047822f5b29 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000540d28}] Aliases:map[]}"
	Nov 15 10:34:31 no-preload-283677 crio[886]: time="2025-11-15T10:34:31.813386958Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 15 10:34:31 no-preload-283677 crio[886]: time="2025-11-15T10:34:31.827502472Z" level=info msg="Ran pod sandbox 22e49771482c72538b94707ce8eee16b142c95b11745c640be6f29e4b63273e5 with infra container: default/busybox/POD" id=857885d1-76e9-4def-abaf-965cbbd345e1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:34:31 no-preload-283677 crio[886]: time="2025-11-15T10:34:31.828722233Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f42d0a9b-238d-4b25-9945-3b64f6048fb9 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:34:31 no-preload-283677 crio[886]: time="2025-11-15T10:34:31.828879176Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=f42d0a9b-238d-4b25-9945-3b64f6048fb9 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:34:31 no-preload-283677 crio[886]: time="2025-11-15T10:34:31.828930404Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=f42d0a9b-238d-4b25-9945-3b64f6048fb9 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:34:31 no-preload-283677 crio[886]: time="2025-11-15T10:34:31.82985347Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bc1bca6d-72dd-4296-b4f5-a71be07dfccf name=/runtime.v1.ImageService/PullImage
	Nov 15 10:34:31 no-preload-283677 crio[886]: time="2025-11-15T10:34:31.831350315Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 15 10:34:35 no-preload-283677 crio[886]: time="2025-11-15T10:34:35.964804825Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=bc1bca6d-72dd-4296-b4f5-a71be07dfccf name=/runtime.v1.ImageService/PullImage
	Nov 15 10:34:35 no-preload-283677 crio[886]: time="2025-11-15T10:34:35.965433671Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=394028f8-f65b-4771-bfcc-ce8da75d687c name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:34:35 no-preload-283677 crio[886]: time="2025-11-15T10:34:35.966658436Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=96d5f901-fb8f-4153-b1c3-440ddc93a584 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:34:35 no-preload-283677 crio[886]: time="2025-11-15T10:34:35.970161452Z" level=info msg="Creating container: default/busybox/busybox" id=5d58d6c1-7de3-4854-a872-b177ddab78dd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:34:35 no-preload-283677 crio[886]: time="2025-11-15T10:34:35.970306867Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:34:35 no-preload-283677 crio[886]: time="2025-11-15T10:34:35.974371169Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:34:35 no-preload-283677 crio[886]: time="2025-11-15T10:34:35.974786909Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:34:36 no-preload-283677 crio[886]: time="2025-11-15T10:34:36.002979709Z" level=info msg="Created container 75f7a5f6a63bb49813824e854f54895f3f994b7c8163aa62d5434507c1c9c995: default/busybox/busybox" id=5d58d6c1-7de3-4854-a872-b177ddab78dd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:34:36 no-preload-283677 crio[886]: time="2025-11-15T10:34:36.003707478Z" level=info msg="Starting container: 75f7a5f6a63bb49813824e854f54895f3f994b7c8163aa62d5434507c1c9c995" id=d05cc724-f0ce-4179-a00e-b2a1294bc376 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:34:36 no-preload-283677 crio[886]: time="2025-11-15T10:34:36.005972709Z" level=info msg="Started container" PID=3095 containerID=75f7a5f6a63bb49813824e854f54895f3f994b7c8163aa62d5434507c1c9c995 description=default/busybox/busybox id=d05cc724-f0ce-4179-a00e-b2a1294bc376 name=/runtime.v1.RuntimeService/StartContainer sandboxID=22e49771482c72538b94707ce8eee16b142c95b11745c640be6f29e4b63273e5
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	75f7a5f6a63bb       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   22e49771482c7       busybox                                     default
	bdef39b349374       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      14 seconds ago      Running             coredns                   0                   33e8dcf5869df       coredns-66bc5c9577-66nkj                    kube-system
	891b360479618       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      14 seconds ago      Running             storage-provisioner       0                   aeb89cc1676cc       storage-provisioner                         kube-system
	3e412a0a48376       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    25 seconds ago      Running             kindnet-cni               0                   ad720e2d780cd       kindnet-x5rwg                               kube-system
	905d87d23a0de       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      29 seconds ago      Running             kube-proxy                0                   e0e1a690bd2d1       kube-proxy-vjbxg                            kube-system
	a84a135626990       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      40 seconds ago      Running             kube-apiserver            0                   1010e8871ef0e       kube-apiserver-no-preload-283677            kube-system
	bb89b2ad0ee62       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      40 seconds ago      Running             kube-controller-manager   0                   82350143c73b1       kube-controller-manager-no-preload-283677   kube-system
	ffe3c98f14c8e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      40 seconds ago      Running             etcd                      0                   4d4d28646f412       etcd-no-preload-283677                      kube-system
	c9048a7b80200       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      40 seconds ago      Running             kube-scheduler            0                   e41c21f2a1980       kube-scheduler-no-preload-283677            kube-system
	
	
	==> coredns [bdef39b34937483917a5c238fb82b1bddb76a6e0edabaeb0aa59e71c927529d4] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37246 - 61770 "HINFO IN 7854520599773956333.8296117342990286873. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.017226949s
	
	
	==> describe nodes <==
	Name:               no-preload-283677
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-283677
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=no-preload-283677
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_34_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:34:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-283677
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:34:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:34:38 +0000   Sat, 15 Nov 2025 10:34:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:34:38 +0000   Sat, 15 Nov 2025 10:34:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:34:38 +0000   Sat, 15 Nov 2025 10:34:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:34:38 +0000   Sat, 15 Nov 2025 10:34:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-283677
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                24a4b1bc-3dc5-430d-9221-78b09868633f
	  Boot ID:                    6a33f504-3a8c-4d49-8fc5-301cf5275efc
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-66nkj                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     30s
	  kube-system                 etcd-no-preload-283677                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
	  kube-system                 kindnet-x5rwg                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-no-preload-283677             250m (3%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-no-preload-283677    200m (2%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-vjbxg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-no-preload-283677             100m (1%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 28s   kube-proxy       
	  Normal   Starting                 36s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 36s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  35s   kubelet          Node no-preload-283677 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    35s   kubelet          Node no-preload-283677 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     35s   kubelet          Node no-preload-283677 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           31s   node-controller  Node no-preload-283677 event: Registered Node no-preload-283677 in Controller
	  Normal   NodeReady                15s   kubelet          Node no-preload-283677 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 7f 53 f9 15 93 08 06
	[  +0.017821] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c6 66 25 e9 45 28 08 06
	[ +19.178294] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 a3 65 b9 28 15 08 06
	[  +0.347929] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 6d df 33 a5 ad 08 06
	[  +0.000319] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 7f 53 f9 15 93 08 06
	[Nov15 10:33] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 96 ed 10 e8 9a 80 08 06
	[  +0.000481] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 a3 65 b9 28 15 08 06
	[ +35.214217] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 c2 9a 56 52 0c 08 06
	[  +9.104720] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 c2 c4 d1 7c dd 08 06
	[Nov15 10:34] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 68 1e 80 53 05 08 06
	[  +0.000444] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 c2 c4 d1 7c dd 08 06
	[ +18.836046] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 98 db 78 4f 0b 08 06
	[  +0.000708] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 c2 9a 56 52 0c 08 06
	
	
	==> etcd [ffe3c98f14c8ed80dbd8a45d5be3fe3f54184e9d0e4253860b31d65f356b7856] <==
	{"level":"warn","ts":"2025-11-15T10:34:04.531868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:04.538750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:04.549058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:04.556382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:04.570852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:04.582117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:04.625436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:04.631232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:04.637238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:04.647144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:04.653652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:04.659618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:04.666059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:04.675095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:04.684420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:04.725527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:04.732542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:04.739493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:04.746130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:04.752286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:04.758260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:04.774331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:04.783991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:04.795486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:04.873335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48098","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:34:43 up  2:16,  0 user,  load average: 5.40, 4.54, 2.67
	Linux no-preload-283677 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3e412a0a483766b2430dd60b481bc6e23eb54b53e53596290e0f06e962392a36] <==
	I1115 10:34:17.584547       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:34:17.675678       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1115 10:34:17.675841       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:34:17.675862       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:34:17.675898       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:34:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:34:17.878983       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:34:17.879049       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:34:17.879061       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:34:17.879200       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 10:34:18.179291       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:34:18.179321       1 metrics.go:72] Registering metrics
	I1115 10:34:18.179383       1 controller.go:711] "Syncing nftables rules"
	I1115 10:34:27.885051       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:34:27.885104       1 main.go:301] handling current node
	I1115 10:34:37.879448       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:34:37.879519       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a84a135626990a2dd012615eb2640336205d28f70e1dd69951d0348f42a90918] <==
	I1115 10:34:05.719803       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 10:34:05.720064       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1115 10:34:05.728266       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:34:05.729143       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1115 10:34:05.729738       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1115 10:34:05.933768       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:34:06.557927       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1115 10:34:06.562919       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1115 10:34:06.562937       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:34:07.065027       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:34:07.101579       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:34:07.226883       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1115 10:34:07.232763       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1115 10:34:07.233767       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:34:07.238143       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:34:07.628746       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:34:08.178362       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:34:08.187867       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1115 10:34:08.196234       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 10:34:13.281848       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:34:13.330613       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1115 10:34:13.330614       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1115 10:34:13.683167       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:34:13.687444       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1115 10:34:41.594995       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:47132: use of closed network connection
	
	
	==> kube-controller-manager [bb89b2ad0ee629ba83188cb255af5683da90115068f9a6f55b1ab0deef2afe68] <==
	I1115 10:34:12.626067       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:34:12.626542       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 10:34:12.626857       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-283677" podCIDRs=["10.244.0.0/24"]
	I1115 10:34:12.627007       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1115 10:34:12.627350       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 10:34:12.628853       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 10:34:12.628879       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 10:34:12.628891       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 10:34:12.628905       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 10:34:12.628980       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 10:34:12.628993       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1115 10:34:12.629016       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1115 10:34:12.629169       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 10:34:12.629398       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1115 10:34:12.629678       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 10:34:12.630335       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 10:34:12.631538       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 10:34:12.632742       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 10:34:12.635992       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:34:12.640200       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:34:12.640216       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:34:12.640223       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 10:34:12.646329       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 10:34:12.648562       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:34:32.622228       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [905d87d23a0ded36a5b37ebbcb28297dd63171574dffb000302fae4c0ef53772] <==
	I1115 10:34:13.769196       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:34:14.035695       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:34:14.136079       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:34:14.136114       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1115 10:34:14.136217       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:34:14.240474       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:34:14.240543       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:34:14.248473       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:34:14.248879       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:34:14.249146       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:34:14.321750       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:34:14.322073       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:34:14.322179       1 config.go:200] "Starting service config controller"
	I1115 10:34:14.322213       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:34:14.322378       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:34:14.322652       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:34:14.323044       1 config.go:309] "Starting node config controller"
	I1115 10:34:14.323149       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:34:14.323161       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:34:14.422313       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:34:14.422324       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 10:34:14.422833       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [c9048a7b80200c7f4ce4f0633db74bec7b96a6cb73b7802c4fcefb10c9238604] <==
	E1115 10:34:05.651640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 10:34:05.651794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 10:34:05.651904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 10:34:05.652024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 10:34:05.652072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 10:34:05.652130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 10:34:05.650945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 10:34:05.657232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 10:34:05.657253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 10:34:05.657576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 10:34:05.658013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 10:34:06.522926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 10:34:06.522935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 10:34:06.537272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 10:34:06.579637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 10:34:06.583595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 10:34:06.629671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 10:34:06.697684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 10:34:06.775508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 10:34:06.866073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 10:34:06.882127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 10:34:06.886077       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 10:34:06.893140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 10:34:07.078833       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1115 10:34:09.343742       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:34:12 no-preload-283677 kubelet[2407]: I1115 10:34:12.722916    2407 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 15 10:34:13 no-preload-283677 kubelet[2407]: I1115 10:34:13.454553    2407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e504759b-46cd-4a41-a8cd-050722131a7d-cni-cfg\") pod \"kindnet-x5rwg\" (UID: \"e504759b-46cd-4a41-a8cd-050722131a7d\") " pod="kube-system/kindnet-x5rwg"
	Nov 15 10:34:13 no-preload-283677 kubelet[2407]: I1115 10:34:13.454619    2407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e504759b-46cd-4a41-a8cd-050722131a7d-lib-modules\") pod \"kindnet-x5rwg\" (UID: \"e504759b-46cd-4a41-a8cd-050722131a7d\") " pod="kube-system/kindnet-x5rwg"
	Nov 15 10:34:13 no-preload-283677 kubelet[2407]: I1115 10:34:13.454641    2407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e504759b-46cd-4a41-a8cd-050722131a7d-xtables-lock\") pod \"kindnet-x5rwg\" (UID: \"e504759b-46cd-4a41-a8cd-050722131a7d\") " pod="kube-system/kindnet-x5rwg"
	Nov 15 10:34:13 no-preload-283677 kubelet[2407]: I1115 10:34:13.454657    2407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/68dffa75-569b-42ef-b4b2-c02a9c1938e7-kube-proxy\") pod \"kube-proxy-vjbxg\" (UID: \"68dffa75-569b-42ef-b4b2-c02a9c1938e7\") " pod="kube-system/kube-proxy-vjbxg"
	Nov 15 10:34:13 no-preload-283677 kubelet[2407]: I1115 10:34:13.454672    2407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68dffa75-569b-42ef-b4b2-c02a9c1938e7-lib-modules\") pod \"kube-proxy-vjbxg\" (UID: \"68dffa75-569b-42ef-b4b2-c02a9c1938e7\") " pod="kube-system/kube-proxy-vjbxg"
	Nov 15 10:34:13 no-preload-283677 kubelet[2407]: I1115 10:34:13.454740    2407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cwbk\" (UniqueName: \"kubernetes.io/projected/68dffa75-569b-42ef-b4b2-c02a9c1938e7-kube-api-access-8cwbk\") pod \"kube-proxy-vjbxg\" (UID: \"68dffa75-569b-42ef-b4b2-c02a9c1938e7\") " pod="kube-system/kube-proxy-vjbxg"
	Nov 15 10:34:13 no-preload-283677 kubelet[2407]: I1115 10:34:13.454806    2407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5qtq\" (UniqueName: \"kubernetes.io/projected/e504759b-46cd-4a41-a8cd-050722131a7d-kube-api-access-m5qtq\") pod \"kindnet-x5rwg\" (UID: \"e504759b-46cd-4a41-a8cd-050722131a7d\") " pod="kube-system/kindnet-x5rwg"
	Nov 15 10:34:13 no-preload-283677 kubelet[2407]: I1115 10:34:13.454834    2407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68dffa75-569b-42ef-b4b2-c02a9c1938e7-xtables-lock\") pod \"kube-proxy-vjbxg\" (UID: \"68dffa75-569b-42ef-b4b2-c02a9c1938e7\") " pod="kube-system/kube-proxy-vjbxg"
	Nov 15 10:34:13 no-preload-283677 kubelet[2407]: W1115 10:34:13.663798    2407 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5be6667f097090011551b6e80bf165ef8b8393ba894fdd8185eba7e40ac44832/crio-ad720e2d780cdb5a0246d2fabdac5b17ca09ede5ee87667e8923aa2958718b34 WatchSource:0}: Error finding container ad720e2d780cdb5a0246d2fabdac5b17ca09ede5ee87667e8923aa2958718b34: Status 404 returned error can't find the container with id ad720e2d780cdb5a0246d2fabdac5b17ca09ede5ee87667e8923aa2958718b34
	Nov 15 10:34:13 no-preload-283677 kubelet[2407]: W1115 10:34:13.664158    2407 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5be6667f097090011551b6e80bf165ef8b8393ba894fdd8185eba7e40ac44832/crio-e0e1a690bd2d17f658d97b71c2cc8bf7fa89dd819f85ae409735d063d637a8c5 WatchSource:0}: Error finding container e0e1a690bd2d17f658d97b71c2cc8bf7fa89dd819f85ae409735d063d637a8c5: Status 404 returned error can't find the container with id e0e1a690bd2d17f658d97b71c2cc8bf7fa89dd819f85ae409735d063d637a8c5
	Nov 15 10:34:14 no-preload-283677 kubelet[2407]: I1115 10:34:14.141314    2407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vjbxg" podStartSLOduration=1.141294634 podStartE2EDuration="1.141294634s" podCreationTimestamp="2025-11-15 10:34:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:34:14.139610883 +0000 UTC m=+6.221621505" watchObservedRunningTime="2025-11-15 10:34:14.141294634 +0000 UTC m=+6.223305235"
	Nov 15 10:34:18 no-preload-283677 kubelet[2407]: I1115 10:34:18.147360    2407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-x5rwg" podStartSLOduration=1.459173456 podStartE2EDuration="5.147339205s" podCreationTimestamp="2025-11-15 10:34:13 +0000 UTC" firstStartedPulling="2025-11-15 10:34:13.666474421 +0000 UTC m=+5.748485002" lastFinishedPulling="2025-11-15 10:34:17.354640174 +0000 UTC m=+9.436650751" observedRunningTime="2025-11-15 10:34:18.147041693 +0000 UTC m=+10.229052322" watchObservedRunningTime="2025-11-15 10:34:18.147339205 +0000 UTC m=+10.229349806"
	Nov 15 10:34:28 no-preload-283677 kubelet[2407]: I1115 10:34:28.406535    2407 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 15 10:34:28 no-preload-283677 kubelet[2407]: I1115 10:34:28.616867    2407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-278sg\" (UniqueName: \"kubernetes.io/projected/24222831-4bc3-4c24-87ba-fd523a1e0c85-kube-api-access-278sg\") pod \"storage-provisioner\" (UID: \"24222831-4bc3-4c24-87ba-fd523a1e0c85\") " pod="kube-system/storage-provisioner"
	Nov 15 10:34:28 no-preload-283677 kubelet[2407]: I1115 10:34:28.616919    2407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pdl7\" (UniqueName: \"kubernetes.io/projected/077957ec-b312-4412-a6b1-ae36eb2e7e16-kube-api-access-5pdl7\") pod \"coredns-66bc5c9577-66nkj\" (UID: \"077957ec-b312-4412-a6b1-ae36eb2e7e16\") " pod="kube-system/coredns-66bc5c9577-66nkj"
	Nov 15 10:34:28 no-preload-283677 kubelet[2407]: I1115 10:34:28.616983    2407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/24222831-4bc3-4c24-87ba-fd523a1e0c85-tmp\") pod \"storage-provisioner\" (UID: \"24222831-4bc3-4c24-87ba-fd523a1e0c85\") " pod="kube-system/storage-provisioner"
	Nov 15 10:34:28 no-preload-283677 kubelet[2407]: I1115 10:34:28.617013    2407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/077957ec-b312-4412-a6b1-ae36eb2e7e16-config-volume\") pod \"coredns-66bc5c9577-66nkj\" (UID: \"077957ec-b312-4412-a6b1-ae36eb2e7e16\") " pod="kube-system/coredns-66bc5c9577-66nkj"
	Nov 15 10:34:28 no-preload-283677 kubelet[2407]: W1115 10:34:28.759669    2407 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5be6667f097090011551b6e80bf165ef8b8393ba894fdd8185eba7e40ac44832/crio-aeb89cc1676cc6c15a6720ec2b0b756d6e62659fe9cca382de67249ee27ea26b WatchSource:0}: Error finding container aeb89cc1676cc6c15a6720ec2b0b756d6e62659fe9cca382de67249ee27ea26b: Status 404 returned error can't find the container with id aeb89cc1676cc6c15a6720ec2b0b756d6e62659fe9cca382de67249ee27ea26b
	Nov 15 10:34:28 no-preload-283677 kubelet[2407]: W1115 10:34:28.783005    2407 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5be6667f097090011551b6e80bf165ef8b8393ba894fdd8185eba7e40ac44832/crio-33e8dcf5869dfc183fa41bddcd16b5cbe480c42ce089a7415c89fa8847a05abe WatchSource:0}: Error finding container 33e8dcf5869dfc183fa41bddcd16b5cbe480c42ce089a7415c89fa8847a05abe: Status 404 returned error can't find the container with id 33e8dcf5869dfc183fa41bddcd16b5cbe480c42ce089a7415c89fa8847a05abe
	Nov 15 10:34:29 no-preload-283677 kubelet[2407]: I1115 10:34:29.178137    2407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-66nkj" podStartSLOduration=16.17811539 podStartE2EDuration="16.17811539s" podCreationTimestamp="2025-11-15 10:34:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:34:29.178061862 +0000 UTC m=+21.260072484" watchObservedRunningTime="2025-11-15 10:34:29.17811539 +0000 UTC m=+21.260125988"
	Nov 15 10:34:29 no-preload-283677 kubelet[2407]: I1115 10:34:29.198377    2407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.198348668 podStartE2EDuration="15.198348668s" podCreationTimestamp="2025-11-15 10:34:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:34:29.187453083 +0000 UTC m=+21.269463681" watchObservedRunningTime="2025-11-15 10:34:29.198348668 +0000 UTC m=+21.280359320"
	Nov 15 10:34:31 no-preload-283677 kubelet[2407]: I1115 10:34:31.538601    2407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v4vk\" (UniqueName: \"kubernetes.io/projected/ddb2a962-6824-4e90-abdf-1404de5921dc-kube-api-access-2v4vk\") pod \"busybox\" (UID: \"ddb2a962-6824-4e90-abdf-1404de5921dc\") " pod="default/busybox"
	Nov 15 10:34:31 no-preload-283677 kubelet[2407]: W1115 10:34:31.826226    2407 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5be6667f097090011551b6e80bf165ef8b8393ba894fdd8185eba7e40ac44832/crio-22e49771482c72538b94707ce8eee16b142c95b11745c640be6f29e4b63273e5 WatchSource:0}: Error finding container 22e49771482c72538b94707ce8eee16b142c95b11745c640be6f29e4b63273e5: Status 404 returned error can't find the container with id 22e49771482c72538b94707ce8eee16b142c95b11745c640be6f29e4b63273e5
	Nov 15 10:34:36 no-preload-283677 kubelet[2407]: I1115 10:34:36.198558    2407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.061700243 podStartE2EDuration="5.198541686s" podCreationTimestamp="2025-11-15 10:34:31 +0000 UTC" firstStartedPulling="2025-11-15 10:34:31.829359503 +0000 UTC m=+23.911370080" lastFinishedPulling="2025-11-15 10:34:35.966200946 +0000 UTC m=+28.048211523" observedRunningTime="2025-11-15 10:34:36.198226176 +0000 UTC m=+28.280236775" watchObservedRunningTime="2025-11-15 10:34:36.198541686 +0000 UTC m=+28.280552283"
	
	
	==> storage-provisioner [891b360479618c6b2d12a8e002542ebd9f271275fd569aab5053ebe3119b9c56] <==
	I1115 10:34:28.824726       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 10:34:28.834121       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:34:28.834175       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 10:34:28.836430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:28.842950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:34:28.843106       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:34:28.843274       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-283677_365958a6-3dbe-43f9-8514-f845c06be7cc!
	I1115 10:34:28.843588       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ca833280-d4c1-43fb-bae2-a3f123cb9113", APIVersion:"v1", ResourceVersion:"413", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-283677_365958a6-3dbe-43f9-8514-f845c06be7cc became leader
	W1115 10:34:28.845980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:28.854931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:34:28.943489       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-283677_365958a6-3dbe-43f9-8514-f845c06be7cc!
	W1115 10:34:30.857900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:30.862505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:32.865824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:32.869884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:34.872648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:34.880351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:36.884253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:36.926894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:38.930461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:38.939115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:40.942862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:40.946864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:42.952725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:42.960383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-283677 -n no-preload-283677
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-283677 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.43s)

x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-719574 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-719574 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (253.415989ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:35:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-719574 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-719574 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-719574 describe deploy/metrics-server -n kube-system: exit status 1 (61.008453ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-719574 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-719574
helpers_test.go:243: (dbg) docker inspect embed-certs-719574:

-- stdout --
	[
	    {
	        "Id": "77b854d73395a6882ad846c6f51d5c84a63e9af66dc48bfe4bdf582432e5fa0b",
	        "Created": "2025-11-15T10:34:39.190268884Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 359527,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:34:39.233019317Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/77b854d73395a6882ad846c6f51d5c84a63e9af66dc48bfe4bdf582432e5fa0b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/77b854d73395a6882ad846c6f51d5c84a63e9af66dc48bfe4bdf582432e5fa0b/hostname",
	        "HostsPath": "/var/lib/docker/containers/77b854d73395a6882ad846c6f51d5c84a63e9af66dc48bfe4bdf582432e5fa0b/hosts",
	        "LogPath": "/var/lib/docker/containers/77b854d73395a6882ad846c6f51d5c84a63e9af66dc48bfe4bdf582432e5fa0b/77b854d73395a6882ad846c6f51d5c84a63e9af66dc48bfe4bdf582432e5fa0b-json.log",
	        "Name": "/embed-certs-719574",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-719574:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-719574",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "77b854d73395a6882ad846c6f51d5c84a63e9af66dc48bfe4bdf582432e5fa0b",
	                "LowerDir": "/var/lib/docker/overlay2/d78394fc0232db571aad8803b55daf0d7b8a72cdb47dfbbeba9dc3f9e8f3bfaa-init/diff:/var/lib/docker/overlay2/507c85b6fb43d9c98216fac79d7a8d08dd20b2a63d0fbfc758336e5d38c04044/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d78394fc0232db571aad8803b55daf0d7b8a72cdb47dfbbeba9dc3f9e8f3bfaa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d78394fc0232db571aad8803b55daf0d7b8a72cdb47dfbbeba9dc3f9e8f3bfaa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d78394fc0232db571aad8803b55daf0d7b8a72cdb47dfbbeba9dc3f9e8f3bfaa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-719574",
	                "Source": "/var/lib/docker/volumes/embed-certs-719574/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-719574",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-719574",
	                "name.minikube.sigs.k8s.io": "embed-certs-719574",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "04213ef8e16c460934d83caaa8184f2d4d171530d105241b59ec692ab9bc886f",
	            "SandboxKey": "/var/run/docker/netns/04213ef8e16c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-719574": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5402d8c1e78ae31835e502183d61451b5187ae582db12fcffbcfeece1b73ea7c",
	                    "EndpointID": "f3781fc2246cde1a5d6e572b9bcc6b2e570bea524011ece642667ce344926187",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "16:45:06:e6:45:38",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-719574",
	                        "77b854d73395"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-719574 -n embed-certs-719574
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-719574 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-719574 logs -n 25: (1.064663412s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-931243 sudo systemctl status docker --all --full --no-pager                                                                                                    │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ ssh     │ -p bridge-931243 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ stop    │ -p no-preload-283677 --alsologtostderr -v=3                                                                                                                              │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ ssh     │ -p bridge-931243 sudo docker system info                                                                                                                                 │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ ssh     │ -p bridge-931243 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ ssh     │ -p bridge-931243 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ ssh     │ -p bridge-931243 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo cri-dockerd --version                                                                                                                              │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ ssh     │ -p bridge-931243 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo containerd config dump                                                                                                                             │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo crio config                                                                                                                                        │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ delete  │ -p bridge-931243                                                                                                                                                         │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ delete  │ -p disable-driver-mounts-435527                                                                                                                                          │ disable-driver-mounts-435527 │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ start   │ -p default-k8s-diff-port-026691 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-283677 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ start   │ -p no-preload-283677 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-719574 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:34:57
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:34:57.108674  368849 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:34:57.109040  368849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:34:57.109051  368849 out.go:374] Setting ErrFile to fd 2...
	I1115 10:34:57.109058  368849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:34:57.111080  368849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 10:34:57.111766  368849 out.go:368] Setting JSON to false
	I1115 10:34:57.113998  368849 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8234,"bootTime":1763194663,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:34:57.114136  368849 start.go:143] virtualization: kvm guest
	I1115 10:34:57.115948  368849 out.go:179] * [no-preload-283677] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:34:57.117523  368849 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 10:34:57.117555  368849 notify.go:221] Checking for updates...
	I1115 10:34:57.119869  368849 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:34:57.121118  368849 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:34:57.122183  368849 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube
	I1115 10:34:57.123828  368849 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:34:57.125045  368849 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:34:57.127033  368849 config.go:182] Loaded profile config "no-preload-283677": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:34:57.127935  368849 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:34:57.156939  368849 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:34:57.157094  368849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:34:57.240931  368849 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:79 SystemTime:2025-11-15 10:34:57.228600984 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:34:57.241107  368849 docker.go:319] overlay module found
	I1115 10:34:57.243006  368849 out.go:179] * Using the docker driver based on existing profile
	I1115 10:34:56.682396  361423 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1115 10:34:56.682754  361423 addons.go:515] duration metric: took 6.415772773s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1115 10:34:56.684325  361423 api_server.go:141] control plane version: v1.28.0
	I1115 10:34:56.684354  361423 api_server.go:131] duration metric: took 8.788317ms to wait for apiserver health ...
	I1115 10:34:56.684364  361423 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:34:56.690921  361423 system_pods.go:59] 8 kube-system pods found
	I1115 10:34:56.691034  361423 system_pods.go:61] "coredns-5dd5756b68-bdpfv" [f9b5c9c2-d642-4a22-890d-89a8f91f771b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:34:56.691127  361423 system_pods.go:61] "etcd-old-k8s-version-087235" [ad6ea576-0a1a-4a8a-96c2-3076229520e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:34:56.691149  361423 system_pods.go:61] "kindnet-7btvm" [40ac7700-b07d-4504-8532-414d2fab7395] Running
	I1115 10:34:56.691158  361423 system_pods.go:61] "kube-apiserver-old-k8s-version-087235" [d035a8eb-e4b4-4eae-8726-c29fa6059168] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:34:56.691166  361423 system_pods.go:61] "kube-controller-manager-old-k8s-version-087235" [9cc04644-cddd-4b9c-964c-826ee56dbc47] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:34:56.691172  361423 system_pods.go:61] "kube-proxy-gl22j" [a854c189-3bd6-4c7d-8160-ae11b35db003] Running
	I1115 10:34:56.691179  361423 system_pods.go:61] "kube-scheduler-old-k8s-version-087235" [1da33e9b-b3a7-4e2b-b951-81bfc76bb515] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:34:56.691184  361423 system_pods.go:61] "storage-provisioner" [f2e47bd9-5a00-47cd-9b2e-5b80244c04a1] Running
	I1115 10:34:56.691199  361423 system_pods.go:74] duration metric: took 6.828122ms to wait for pod list to return data ...
	I1115 10:34:56.691207  361423 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:34:56.695797  361423 default_sa.go:45] found service account: "default"
	I1115 10:34:56.695993  361423 default_sa.go:55] duration metric: took 4.775405ms for default service account to be created ...
	I1115 10:34:56.696009  361423 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:34:56.706900  361423 system_pods.go:86] 8 kube-system pods found
	I1115 10:34:56.706946  361423 system_pods.go:89] "coredns-5dd5756b68-bdpfv" [f9b5c9c2-d642-4a22-890d-89a8f91f771b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:34:56.707061  361423 system_pods.go:89] "etcd-old-k8s-version-087235" [ad6ea576-0a1a-4a8a-96c2-3076229520e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:34:56.707075  361423 system_pods.go:89] "kindnet-7btvm" [40ac7700-b07d-4504-8532-414d2fab7395] Running
	I1115 10:34:56.707086  361423 system_pods.go:89] "kube-apiserver-old-k8s-version-087235" [d035a8eb-e4b4-4eae-8726-c29fa6059168] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:34:56.707148  361423 system_pods.go:89] "kube-controller-manager-old-k8s-version-087235" [9cc04644-cddd-4b9c-964c-826ee56dbc47] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:34:56.707168  361423 system_pods.go:89] "kube-proxy-gl22j" [a854c189-3bd6-4c7d-8160-ae11b35db003] Running
	I1115 10:34:56.707188  361423 system_pods.go:89] "kube-scheduler-old-k8s-version-087235" [1da33e9b-b3a7-4e2b-b951-81bfc76bb515] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:34:56.707217  361423 system_pods.go:89] "storage-provisioner" [f2e47bd9-5a00-47cd-9b2e-5b80244c04a1] Running
	I1115 10:34:56.707230  361423 system_pods.go:126] duration metric: took 11.211997ms to wait for k8s-apps to be running ...
	I1115 10:34:56.707238  361423 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:34:56.707321  361423 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:34:56.739287  361423 system_svc.go:56] duration metric: took 32.035692ms WaitForService to wait for kubelet
	I1115 10:34:56.739406  361423 kubeadm.go:587] duration metric: took 6.472459641s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:34:56.739438  361423 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:34:56.744554  361423 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:34:56.744591  361423 node_conditions.go:123] node cpu capacity is 8
	I1115 10:34:56.744610  361423 node_conditions.go:105] duration metric: took 5.164463ms to run NodePressure ...
	I1115 10:34:56.744623  361423 start.go:242] waiting for startup goroutines ...
	I1115 10:34:56.744633  361423 start.go:247] waiting for cluster config update ...
	I1115 10:34:56.744648  361423 start.go:256] writing updated cluster config ...
	I1115 10:34:56.744949  361423 ssh_runner.go:195] Run: rm -f paused
	I1115 10:34:56.752666  361423 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:34:56.758416  361423 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-bdpfv" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:57.244155  368849 start.go:309] selected driver: docker
	I1115 10:34:57.244180  368849 start.go:930] validating driver "docker" against &{Name:no-preload-283677 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-283677 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:34:57.244301  368849 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:34:57.245328  368849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:34:57.321410  368849 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:79 SystemTime:2025-11-15 10:34:57.3090885 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed
by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:34:57.321759  368849 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:34:57.321796  368849 cni.go:84] Creating CNI manager for ""
	I1115 10:34:57.321849  368849 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:34:57.321897  368849 start.go:353] cluster config:
	{Name:no-preload-283677 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-283677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:34:57.324353  368849 out.go:179] * Starting "no-preload-283677" primary control-plane node in "no-preload-283677" cluster
	I1115 10:34:57.325413  368849 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:34:57.326593  368849 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:34:57.327877  368849 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:34:57.327926  368849 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:34:57.328103  368849 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/config.json ...
	I1115 10:34:57.328512  368849 cache.go:107] acquiring lock: {Name:mk04e19ef4726336e87a2efa989ec89b11194587 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:34:57.328600  368849 cache.go:115] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1115 10:34:57.328611  368849 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 116.56µs
	I1115 10:34:57.328622  368849 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1115 10:34:57.328638  368849 cache.go:107] acquiring lock: {Name:mk160c40720b01bd77226b9ee86c8a56493b3987 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:34:57.328681  368849 cache.go:115] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1115 10:34:57.328688  368849 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 53.964µs
	I1115 10:34:57.328696  368849 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1115 10:34:57.328709  368849 cache.go:107] acquiring lock: {Name:mk568a3320f172c7702e0c64f82e9ab66f08dc56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:34:57.328745  368849 cache.go:115] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1115 10:34:57.328753  368849 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 45.66µs
	I1115 10:34:57.328760  368849 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1115 10:34:57.328772  368849 cache.go:107] acquiring lock: {Name:mk4538f0a5ff75ff8439835bfd59d64a365cd71b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:34:57.328806  368849 cache.go:115] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1115 10:34:57.328812  368849 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 42.3µs
	I1115 10:34:57.328820  368849 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1115 10:34:57.328842  368849 cache.go:107] acquiring lock: {Name:mkebd0527ca8cd5425c0189738c4c613b1d0ad77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:34:57.328878  368849 cache.go:115] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1115 10:34:57.328884  368849 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 55.883µs
	I1115 10:34:57.328893  368849 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1115 10:34:57.329374  368849 cache.go:107] acquiring lock: {Name:mk5c9d9d1f91519c0468e055d96da9be78d8987d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:34:57.329494  368849 cache.go:115] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1115 10:34:57.329505  368849 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 157µs
	I1115 10:34:57.329514  368849 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1115 10:34:57.329533  368849 cache.go:107] acquiring lock: {Name:mk6d25d7926738a8037e85ed094d1b802d5c1f77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:34:57.329577  368849 cache.go:115] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1115 10:34:57.329583  368849 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 53.182µs
	I1115 10:34:57.329591  368849 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1115 10:34:57.329625  368849 cache.go:107] acquiring lock: {Name:mkc6ed1fa15fd637355ac953d6d06e91f3f34a59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:34:57.329680  368849 cache.go:115] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1115 10:34:57.329687  368849 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 65.791µs
	I1115 10:34:57.329700  368849 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1115 10:34:57.329724  368849 cache.go:87] Successfully saved all images to host disk.
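The cache lines above only re-save an image tarball when it is missing on disk, which is why each "cache image ... took ..." entry completes in microseconds. A minimal Go sketch of that exists-check follows; the cache directory, path-layout helper and function names are illustrative, not minikube's own code.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"time"
)

// cachePathFor maps an image ref such as "registry.k8s.io/pause:3.10.1" to the
// tarball layout seen in the log: .../images/amd64/registry.k8s.io/pause_3.10.1
func cachePathFor(cacheDir, image string) string {
	return filepath.Join(cacheDir, "images", "amd64", strings.ReplaceAll(image, ":", "_"))
}

// ensureCached returns immediately when the tarball is already on disk,
// otherwise a real implementation would pull the image and write it out.
func ensureCached(cacheDir, image string) (bool, error) {
	start := time.Now()
	p := cachePathFor(cacheDir, image)
	if _, err := os.Stat(p); err == nil {
		fmt.Printf("cache image %q -> %q took %s (already exists)\n", image, p, time.Since(start))
		return true, nil
	} else if !os.IsNotExist(err) {
		return false, err
	}
	// Pull-and-save is intentionally left out of this sketch.
	return false, nil
}

func main() {
	_, _ = ensureCached(os.ExpandEnv("$HOME/.minikube/cache"), "registry.k8s.io/pause:3.10.1")
}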
	I1115 10:34:57.355013  368849 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:34:57.355036  368849 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:34:57.355056  368849 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:34:57.355084  368849 start.go:360] acquireMachinesLock for no-preload-283677: {Name:mk8d9dc816de84055c03b404ddcac096c332be5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:34:57.355145  368849 start.go:364] duration metric: took 42.843µs to acquireMachinesLock for "no-preload-283677"
	I1115 10:34:57.355165  368849 start.go:96] Skipping create...Using existing machine configuration
	I1115 10:34:57.355174  368849 fix.go:54] fixHost starting: 
	I1115 10:34:57.355455  368849 cli_runner.go:164] Run: docker container inspect no-preload-283677 --format={{.State.Status}}
	I1115 10:34:57.375065  368849 fix.go:112] recreateIfNeeded on no-preload-283677: state=Stopped err=<nil>
	W1115 10:34:57.375094  368849 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 10:34:52.640072  367608 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:34:52.641977  367608 start.go:159] libmachine.API.Create for "default-k8s-diff-port-026691" (driver="docker")
	I1115 10:34:52.642026  367608 client.go:173] LocalClient.Create starting
	I1115 10:34:52.642126  367608 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem
	I1115 10:34:52.642171  367608 main.go:143] libmachine: Decoding PEM data...
	I1115 10:34:52.642193  367608 main.go:143] libmachine: Parsing certificate...
	I1115 10:34:52.642275  367608 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem
	I1115 10:34:52.642302  367608 main.go:143] libmachine: Decoding PEM data...
	I1115 10:34:52.642316  367608 main.go:143] libmachine: Parsing certificate...
	I1115 10:34:52.642807  367608 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-026691 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:34:52.663735  367608 cli_runner.go:211] docker network inspect default-k8s-diff-port-026691 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:34:52.663801  367608 network_create.go:284] running [docker network inspect default-k8s-diff-port-026691] to gather additional debugging logs...
	I1115 10:34:52.663820  367608 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-026691
	W1115 10:34:52.681651  367608 cli_runner.go:211] docker network inspect default-k8s-diff-port-026691 returned with exit code 1
	I1115 10:34:52.681682  367608 network_create.go:287] error running [docker network inspect default-k8s-diff-port-026691]: docker network inspect default-k8s-diff-port-026691: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-026691 not found
	I1115 10:34:52.681694  367608 network_create.go:289] output of [docker network inspect default-k8s-diff-port-026691]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-026691 not found
	
	** /stderr **
	I1115 10:34:52.681815  367608 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:34:52.703576  367608 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-77644897380e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:f9:e2:83:c3:91} reservation:<nil>}
	I1115 10:34:52.704399  367608 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e808d03b2052 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:03:24:ef:92:a1} reservation:<nil>}
	I1115 10:34:52.705358  367608 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3fb45eeaa66e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1a:b3:b7:0f:2e:99} reservation:<nil>}
	I1115 10:34:52.706067  367608 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-31f43b806931 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0a:cc:8c:d8:0d:c5} reservation:<nil>}
	I1115 10:34:52.707182  367608 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ec6c60}
	I1115 10:34:52.707213  367608 network_create.go:124] attempt to create docker network default-k8s-diff-port-026691 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1115 10:34:52.707274  367608 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-026691 default-k8s-diff-port-026691
	I1115 10:34:52.763872  367608 network_create.go:108] docker network default-k8s-diff-port-026691 192.168.85.0/24 created
	I1115 10:34:52.763908  367608 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-026691" container
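The network setup above walks candidate private /24 subnets and takes the first one not already backed by a docker bridge. A rough Go sketch of that scan is below; the +9 step in the third octet is inferred from this log (49, 58, 67, 76, 85), and the takenSubnets map stands in for the real interface probe.

package main

import "fmt"

// firstFreeSubnet returns the first candidate 192.168.x.0/24 not marked as
// taken; the increment between candidates is an assumption based on the log.
func firstFreeSubnet(taken map[string]bool) (string, error) {
	for third := 49; third < 255; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[subnet] {
			return subnet, nil
		}
	}
	return "", fmt.Errorf("no free private /24 available")
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
	}
	s, _ := firstFreeSubnet(taken)
	fmt.Println(s) // prints 192.168.85.0/24, matching the subnet picked above
}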
	I1115 10:34:52.764001  367608 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:34:52.794341  367608 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-026691 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-026691 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:34:52.814745  367608 oci.go:103] Successfully created a docker volume default-k8s-diff-port-026691
	I1115 10:34:52.814828  367608 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-026691-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-026691 --entrypoint /usr/bin/test -v default-k8s-diff-port-026691:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:34:53.252498  367608 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-026691
	I1115 10:34:53.252579  367608 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:34:53.252594  367608 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:34:53.252663  367608 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-026691:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1115 10:34:56.654774  367608 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-026691:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.402061031s)
	I1115 10:34:56.654813  367608 kic.go:203] duration metric: took 3.402214691s to extract preloaded images to volume ...
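The extraction step above untars the lz4 preload tarball into the named docker volume by running tar inside a disposable kicbase container. A sketch that shells out the same way; the helper name and the placeholder paths in main are illustrative, while the docker arguments are copied from the command above.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload unpacks the preloaded-images tarball into a docker volume by
// running tar inside a throwaway container, mirroring the log's
// "docker run --rm --entrypoint /usr/bin/tar ... -I lz4 -xf /preloaded.tar -C /extractDir".
func extractPreload(tarball, volume, kicbase string) error {
	start := time.Now()
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		kicbase,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract failed: %v: %s", err, out)
	}
	fmt.Printf("extracted %s into volume %s in %s\n", tarball, volume, time.Since(start))
	return nil
}

func main() {
	// Placeholder arguments; the real run uses the profile's volume and tarball.
	_ = extractPreload("/path/to/preloaded-images.tar.lz4", "default-k8s-diff-port-026691",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837")
}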
	W1115 10:34:56.654990  367608 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 10:34:56.655155  367608 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 10:34:56.764857  367608 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-026691 --name default-k8s-diff-port-026691 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-026691 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-026691 --network default-k8s-diff-port-026691 --ip 192.168.85.2 --volume default-k8s-diff-port-026691:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 10:34:57.094021  367608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Running}}
	I1115 10:34:57.121300  367608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:34:57.147203  367608 cli_runner.go:164] Run: docker exec default-k8s-diff-port-026691 stat /var/lib/dpkg/alternatives/iptables
	I1115 10:34:57.208529  367608 oci.go:144] the created container "default-k8s-diff-port-026691" has a running status.
	I1115 10:34:57.208578  367608 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa...
	I1115 10:34:54.186226  358343 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.814123ms
	I1115 10:34:54.189071  358343 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 10:34:54.189208  358343 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1115 10:34:54.189338  358343 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 10:34:54.189440  358343 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 10:34:57.855035  367608 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 10:34:57.883874  367608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:34:57.907435  367608 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 10:34:57.907455  367608 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-026691 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 10:34:57.965903  367608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:34:57.988026  367608 machine.go:94] provisionDockerMachine start ...
	I1115 10:34:57.988137  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:34:58.012542  367608 main.go:143] libmachine: Using SSH client type: native
	I1115 10:34:58.012924  367608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1115 10:34:58.012944  367608 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:34:58.159148  367608 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-026691
	
	I1115 10:34:58.159194  367608 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-026691"
	I1115 10:34:58.159277  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:34:58.189206  367608 main.go:143] libmachine: Using SSH client type: native
	I1115 10:34:58.189501  367608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1115 10:34:58.189523  367608 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-026691 && echo "default-k8s-diff-port-026691" | sudo tee /etc/hostname
	I1115 10:34:58.348350  367608 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-026691
	
	I1115 10:34:58.348454  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:34:58.368199  367608 main.go:143] libmachine: Using SSH client type: native
	I1115 10:34:58.368410  367608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1115 10:34:58.368430  367608 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-026691' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-026691/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-026691' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:34:58.503716  367608 main.go:143] libmachine: SSH cmd err, output: <nil>: 
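The SSH command above makes the machine's hostname resolve locally: if /etc/hosts has no matching entry, it either rewrites the existing 127.0.1.1 line or appends one. A small Go sketch that assembles the same shell fragment for an arbitrary hostname (hypothetical helper, not minikube's implementation):

package main

import "fmt"

// hostsFixupScript reproduces the shell fragment from the log: if no line in
// /etc/hosts already ends with the hostname, rewrite the 127.0.1.1 entry in
// place or append a new one.
func hostsFixupScript(hostname string) string {
	return fmt.Sprintf(`
	if ! grep -xq '.*\s%s' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
		else
			echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
		fi
	fi`, hostname, hostname, hostname)
}

func main() {
	fmt.Println(hostsFixupScript("default-k8s-diff-port-026691"))
}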
	I1115 10:34:58.503754  367608 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-55448/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-55448/.minikube}
	I1115 10:34:58.503778  367608 ubuntu.go:190] setting up certificates
	I1115 10:34:58.503791  367608 provision.go:84] configureAuth start
	I1115 10:34:58.503853  367608 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-026691
	I1115 10:34:58.522763  367608 provision.go:143] copyHostCerts
	I1115 10:34:58.522820  367608 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem, removing ...
	I1115 10:34:58.522830  367608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem
	I1115 10:34:58.522904  367608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem (1082 bytes)
	I1115 10:34:58.523027  367608 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem, removing ...
	I1115 10:34:58.523038  367608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem
	I1115 10:34:58.523078  367608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem (1123 bytes)
	I1115 10:34:58.523158  367608 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem, removing ...
	I1115 10:34:58.523169  367608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem
	I1115 10:34:58.523203  367608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem (1679 bytes)
	I1115 10:34:58.523272  367608 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-026691 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-026691 localhost minikube]
	I1115 10:34:58.590090  367608 provision.go:177] copyRemoteCerts
	I1115 10:34:58.590145  367608 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:34:58.590187  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:34:58.608644  367608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:34:58.703764  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:34:58.724559  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1115 10:34:58.742665  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:34:58.759994  367608 provision.go:87] duration metric: took 256.187247ms to configureAuth
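configureAuth above refreshes the host-side certificate copies by removing any stale file and copying the source over, reporting the byte count ("cp: .../certs/ca.pem --> .../ca.pem (1082 bytes)"). A minimal Go sketch of that refresh; the paths and helper name are illustrative.

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// refreshCert removes a stale destination copy if present, then copies the
// source cert over and reports the number of bytes written.
func refreshCert(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil {
			return err
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o600)
	if err != nil {
		return err
	}
	defer out.Close()
	n, err := io.Copy(out, in)
	if err != nil {
		return err
	}
	fmt.Printf("cp: %s --> %s (%d bytes)\n", src, dst, n)
	return nil
}

func main() {
	base := os.ExpandEnv("$HOME/.minikube") // hypothetical location
	_ = refreshCert(filepath.Join(base, "certs", "ca.pem"), filepath.Join(base, "ca.pem"))
}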
	I1115 10:34:58.760028  367608 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:34:58.760213  367608 config.go:182] Loaded profile config "default-k8s-diff-port-026691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:34:58.760342  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:34:58.778722  367608 main.go:143] libmachine: Using SSH client type: native
	I1115 10:34:58.779014  367608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1115 10:34:58.779041  367608 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:34:59.033178  367608 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:34:59.033211  367608 machine.go:97] duration metric: took 1.045153146s to provisionDockerMachine
	I1115 10:34:59.033226  367608 client.go:176] duration metric: took 6.391191213s to LocalClient.Create
	I1115 10:34:59.033253  367608 start.go:167] duration metric: took 6.391304318s to libmachine.API.Create "default-k8s-diff-port-026691"
	I1115 10:34:59.033266  367608 start.go:293] postStartSetup for "default-k8s-diff-port-026691" (driver="docker")
	I1115 10:34:59.033285  367608 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:34:59.033376  367608 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:34:59.033438  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:34:59.053944  367608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:34:59.157205  367608 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:34:59.161685  367608 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:34:59.161717  367608 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:34:59.161733  367608 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/addons for local assets ...
	I1115 10:34:59.161795  367608 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/files for local assets ...
	I1115 10:34:59.161913  367608 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem -> 589622.pem in /etc/ssl/certs
	I1115 10:34:59.162069  367608 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:34:59.171183  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:34:59.197319  367608 start.go:296] duration metric: took 164.030813ms for postStartSetup
	I1115 10:34:59.197664  367608 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-026691
	I1115 10:34:59.222158  367608 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/config.json ...
	I1115 10:34:59.222456  367608 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:34:59.222508  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:34:59.245172  367608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:34:59.338333  367608 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:34:59.342944  367608 start.go:128] duration metric: took 6.710898676s to createHost
	I1115 10:34:59.342984  367608 start.go:83] releasing machines lock for "default-k8s-diff-port-026691", held for 6.711262903s
	I1115 10:34:59.343053  367608 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-026691
	I1115 10:34:59.360891  367608 ssh_runner.go:195] Run: cat /version.json
	I1115 10:34:59.360960  367608 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:34:59.360981  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:34:59.361027  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:34:59.380703  367608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:34:59.381093  367608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:34:59.543341  367608 ssh_runner.go:195] Run: systemctl --version
	I1115 10:34:59.550150  367608 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:34:59.588663  367608 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:34:59.594351  367608 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:34:59.594425  367608 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:34:59.627965  367608 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1115 10:34:59.627992  367608 start.go:496] detecting cgroup driver to use...
	I1115 10:34:59.628030  367608 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:34:59.628089  367608 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:34:59.644582  367608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:34:59.656945  367608 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:34:59.657016  367608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:34:59.673964  367608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:34:59.698909  367608 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:34:59.793897  367608 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:34:59.897920  367608 docker.go:234] disabling docker service ...
	I1115 10:34:59.898017  367608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:34:59.921681  367608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:34:59.935475  367608 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:35:00.040217  367608 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:35:00.145087  367608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:35:00.157908  367608 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:35:00.172301  367608 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:35:00.172359  367608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:00.185532  367608 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:35:00.185603  367608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:00.195014  367608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:00.204978  367608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:00.216321  367608 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:35:00.224805  367608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:00.233598  367608 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:00.248215  367608 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:00.257523  367608 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:35:00.265789  367608 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:35:00.273509  367608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:00.370097  367608 ssh_runner.go:195] Run: sudo systemctl restart crio
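The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup) and then restarts crio. A sketch that only assembles those commands for a given pause image and cgroup driver; the helper name is hypothetical and the SSH plumbing that would run them is omitted.

package main

import "fmt"

// crioConfigCommands reproduces the in-place edits from the log; executing
// them on the node is left to an SSH runner.
func crioConfigCommands(pauseImage, cgroupDriver string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupDriver, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
}

func main() {
	for _, c := range crioConfigCommands("registry.k8s.io/pause:3.10.1", "cgroupfs") {
		fmt.Println(c)
	}
}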
	I1115 10:35:00.480383  367608 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:35:00.480459  367608 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:35:00.484506  367608 start.go:564] Will wait 60s for crictl version
	I1115 10:35:00.484571  367608 ssh_runner.go:195] Run: which crictl
	I1115 10:35:00.488156  367608 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:35:00.512458  367608 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:35:00.512546  367608 ssh_runner.go:195] Run: crio --version
	I1115 10:35:00.540995  367608 ssh_runner.go:195] Run: crio --version
	I1115 10:35:00.580705  367608 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:34:57.377717  368849 out.go:252] * Restarting existing docker container for "no-preload-283677" ...
	I1115 10:34:57.377792  368849 cli_runner.go:164] Run: docker start no-preload-283677
	I1115 10:34:57.726123  368849 cli_runner.go:164] Run: docker container inspect no-preload-283677 --format={{.State.Status}}
	I1115 10:34:57.753398  368849 kic.go:430] container "no-preload-283677" state is running.
	I1115 10:34:57.753840  368849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-283677
	I1115 10:34:57.778603  368849 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/config.json ...
	I1115 10:34:57.778940  368849 machine.go:94] provisionDockerMachine start ...
	I1115 10:34:57.779390  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:34:57.804369  368849 main.go:143] libmachine: Using SSH client type: native
	I1115 10:34:57.805107  368849 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1115 10:34:57.805139  368849 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:34:57.806009  368849 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40268->127.0.0.1:33114: read: connection reset by peer
	I1115 10:35:00.948741  368849 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-283677
	
	I1115 10:35:00.948777  368849 ubuntu.go:182] provisioning hostname "no-preload-283677"
	I1115 10:35:00.948835  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:00.969578  368849 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:00.969832  368849 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1115 10:35:00.969850  368849 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-283677 && echo "no-preload-283677" | sudo tee /etc/hostname
	I1115 10:35:01.127681  368849 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-283677
	
	I1115 10:35:01.127767  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:01.146233  368849 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:01.146580  368849 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1115 10:35:01.146607  368849 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-283677' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-283677/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-283677' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:35:01.284681  368849 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:35:01.284713  368849 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-55448/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-55448/.minikube}
	I1115 10:35:01.284748  368849 ubuntu.go:190] setting up certificates
	I1115 10:35:01.284762  368849 provision.go:84] configureAuth start
	I1115 10:35:01.284822  368849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-283677
	I1115 10:35:01.303443  368849 provision.go:143] copyHostCerts
	I1115 10:35:01.303518  368849 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem, removing ...
	I1115 10:35:01.303535  368849 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem
	I1115 10:35:01.303611  368849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem (1123 bytes)
	I1115 10:35:01.303735  368849 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem, removing ...
	I1115 10:35:01.303747  368849 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem
	I1115 10:35:01.303788  368849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem (1679 bytes)
	I1115 10:35:01.303897  368849 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem, removing ...
	I1115 10:35:01.303909  368849 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem
	I1115 10:35:01.303945  368849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem (1082 bytes)
	I1115 10:35:01.304057  368849 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem org=jenkins.no-preload-283677 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-283677]
	I1115 10:35:01.479935  368849 provision.go:177] copyRemoteCerts
	I1115 10:35:01.480049  368849 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:35:01.480102  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:01.499143  368849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
	I1115 10:35:01.593407  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:35:01.611444  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 10:35:01.629246  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 10:35:01.647087  368849 provision.go:87] duration metric: took 362.308284ms to configureAuth
	I1115 10:35:01.647136  368849 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:35:01.647339  368849 config.go:182] Loaded profile config "no-preload-283677": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:01.647467  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:01.667372  368849 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:01.667673  368849 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1115 10:35:01.667695  368849 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:35:01.979196  368849 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:35:01.979228  368849 machine.go:97] duration metric: took 4.200198854s to provisionDockerMachine
	I1115 10:35:01.979281  368849 start.go:293] postStartSetup for "no-preload-283677" (driver="docker")
	I1115 10:35:01.979310  368849 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:35:01.979376  368849 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:35:01.979445  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:02.006457  368849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
	W1115 10:34:58.763972  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	W1115 10:35:00.765899  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	I1115 10:35:00.581817  367608 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-026691 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:35:00.607057  367608 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1115 10:35:00.613228  367608 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:35:00.626466  367608 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-026691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:35:00.626625  367608 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:35:00.626700  367608 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:35:00.658108  367608 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:35:00.658131  367608 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:35:00.658175  367608 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:35:00.696481  367608 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:35:00.696507  367608 cache_images.go:86] Images are preloaded, skipping loading
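The check above lists images via crictl and skips loading when everything required is already present. A Go sketch of that verification; the JSON field names are assumed from crictl's --output json format, and the required list here is only an example.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages models only the fields needed from `crictl images --output json`
// (field names assumed, not confirmed from crictl's source).
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// allPreloaded reports whether every required image tag is already known to
// the CRI runtime, letting the caller skip extraction as the log does.
func allPreloaded(required []string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var parsed crictlImages
	if err := json.Unmarshal(out, &parsed); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range parsed.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range required {
		if !have[want] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := allPreloaded([]string{"registry.k8s.io/pause:3.10.1", "registry.k8s.io/etcd:3.6.4-0"})
	fmt.Println(ok, err)
}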
	I1115 10:35:00.696517  367608 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1115 10:35:00.696629  367608 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-026691 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
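The kubelet drop-in above is assembled from the node's name, IP and Kubernetes version. A sketch that rebuilds the same ExecStart line from those parameters; the flag set is copied from the unit above rather than derived from minikube's templates, and the helper name is hypothetical.

package main

import (
	"fmt"
	"strings"
)

// kubeletExecStart rebuilds the ExecStart line for a given Kubernetes version,
// node name and node IP, matching the drop-in shown in the log.
func kubeletExecStart(version, nodeName, nodeIP string) string {
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--cgroups-per-qos=false",
		"--config=/var/lib/kubelet/config.yaml",
		"--enforce-node-allocatable=",
		"--hostname-override=" + nodeName,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + nodeIP,
	}
	return fmt.Sprintf("ExecStart=/var/lib/minikube/binaries/%s/kubelet %s", version, strings.Join(flags, " "))
}

func main() {
	fmt.Println(kubeletExecStart("v1.34.1", "default-k8s-diff-port-026691", "192.168.85.2"))
}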
	I1115 10:35:00.696715  367608 ssh_runner.go:195] Run: crio config
	I1115 10:35:00.744746  367608 cni.go:84] Creating CNI manager for ""
	I1115 10:35:00.744772  367608 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:35:00.744791  367608 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:35:00.744814  367608 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-026691 NodeName:default-k8s-diff-port-026691 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:35:00.744945  367608 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-026691"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
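
The dump above is the kubeadm/kubelet/kube-proxy configuration that minikube renders and later copies to /var/tmp/minikube/kubeadm.yaml.new. As a rough illustration only (not minikube's actual implementation), such a file is typically produced by filling a Go text/template with the node name, IP and API server port; the sketch below uses hypothetical field names and covers just the InitConfiguration section, with values taken from the log.

```go
package main

import (
	"os"
	"text/template"
)

// initCfgTmpl is a trimmed-down stand-in for the InitConfiguration section
// shown in the log above; the template and struct names are illustrative.
const initCfgTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.NodeIP}}"
  taints: []
`

type nodeParams struct {
	NodeName      string
	NodeIP        string
	APIServerPort int
}

func main() {
	t := template.Must(template.New("init").Parse(initCfgTmpl))
	// Values from the log: default-k8s-diff-port-026691 on 192.168.85.2:8444.
	p := nodeParams{NodeName: "default-k8s-diff-port-026691", NodeIP: "192.168.85.2", APIServerPort: 8444}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
```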
	
	I1115 10:35:00.745029  367608 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:35:00.753434  367608 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:35:00.753504  367608 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:35:00.762137  367608 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1115 10:35:00.775671  367608 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:35:00.797030  367608 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1115 10:35:00.815366  367608 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:35:00.819023  367608 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
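
The two commands above first check /etc/hosts for the control-plane.minikube.internal entry and then rewrite it idempotently: strip any existing line ending in that hostname, append the fresh mapping, and copy the result back into place. A minimal Go sketch of the same idea (running locally rather than over SSH, with the path, IP and hostname hard-coded purely for illustration):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry rewrites hostsPath so that exactly one line maps ip to host,
// mirroring the { grep -v ...; echo ...; } > tmp; cp pipeline from the log above.
func upsertHostsEntry(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing mapping for the hostname (the grep -v step).
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHostsEntry("/etc/hosts", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```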
	I1115 10:35:00.829919  367608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:00.924599  367608 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:35:00.946789  367608 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691 for IP: 192.168.85.2
	I1115 10:35:00.946817  367608 certs.go:195] generating shared ca certs ...
	I1115 10:35:00.946839  367608 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:00.947089  367608 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 10:35:00.947146  367608 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 10:35:00.947160  367608 certs.go:257] generating profile certs ...
	I1115 10:35:00.947236  367608 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/client.key
	I1115 10:35:00.947253  367608 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/client.crt with IP's: []
	I1115 10:35:01.041305  367608 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/client.crt ...
	I1115 10:35:01.041332  367608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/client.crt: {Name:mk850ac752ca8e1bd96e0112fe9cd33d06ae9831 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:01.041557  367608 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/client.key ...
	I1115 10:35:01.041576  367608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/client.key: {Name:mkc9f22f4d08691fb039bf58ca3696be01b8d2da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:01.041712  367608 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.key.f8824eec
	I1115 10:35:01.041737  367608 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.crt.f8824eec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1115 10:35:01.322559  367608 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.crt.f8824eec ...
	I1115 10:35:01.322598  367608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.crt.f8824eec: {Name:mk3e587e72b06a1c3e15f6608c5003fe07edb847 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:01.322844  367608 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.key.f8824eec ...
	I1115 10:35:01.322868  367608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.key.f8824eec: {Name:mka898e08cb25730cf00e76bc5148d21b3cfc491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:01.323013  367608 certs.go:382] copying /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.crt.f8824eec -> /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.crt
	I1115 10:35:01.323157  367608 certs.go:386] copying /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.key.f8824eec -> /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.key
	I1115 10:35:01.323229  367608 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.key
	I1115 10:35:01.323245  367608 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.crt with IP's: []
	I1115 10:35:01.668272  367608 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.crt ...
	I1115 10:35:01.668297  367608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.crt: {Name:mkd2364b507fdcd0e7075f46fb15018bc571dc50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:01.668447  367608 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.key ...
	I1115 10:35:01.668460  367608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.key: {Name:mk25118b0c3511bad3ea017a869823a0d0c461a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:01.668624  367608 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:35:01.668657  367608 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:35:01.668665  367608 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:35:01.668690  367608 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:35:01.668714  367608 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:35:01.668735  367608 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:35:01.668771  367608 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:35:01.669438  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:35:01.688356  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:35:01.706706  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:35:01.726247  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:35:01.748085  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1115 10:35:01.768285  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:35:01.788057  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:35:01.809920  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:35:01.831794  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:35:01.856775  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:35:01.878135  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:35:01.900771  367608 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:35:01.917435  367608 ssh_runner.go:195] Run: openssl version
	I1115 10:35:01.925573  367608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:35:01.937193  367608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:01.942570  367608 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:01.942644  367608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:01.994260  367608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:35:02.006261  367608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:35:02.017141  367608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:35:02.021709  367608 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:35:02.021780  367608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:35:02.067748  367608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:35:02.078280  367608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:35:02.088732  367608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:35:02.093398  367608 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:35:02.093499  367608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:35:02.141627  367608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
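
The block above installs each certificate under /usr/share/ca-certificates and then creates the `<subject-hash>.0` symlink in /etc/ssl/certs that OpenSSL's lookup expects. A hedged Go sketch of the same flow, shelling out to the `openssl` binary (assumed to be on PATH) exactly as the logged commands do:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of certPath and creates the
// <certsDir>/<hash>.0 symlink pointing at it, as in the logged ln -fs commands.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Replace any stale link (ln -fs behaviour).
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```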
	I1115 10:35:02.152207  367608 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:35:02.157541  367608 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 10:35:02.157606  367608 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-026691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:35:02.157707  367608 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:35:02.157765  367608 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:35:02.193432  367608 cri.go:89] found id: ""
	I1115 10:35:02.193509  367608 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:35:02.203886  367608 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 10:35:02.213132  367608 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 10:35:02.213199  367608 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 10:35:02.223642  367608 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 10:35:02.223664  367608 kubeadm.go:158] found existing configuration files:
	
	I1115 10:35:02.223715  367608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1115 10:35:02.233048  367608 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 10:35:02.233117  367608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 10:35:02.242878  367608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1115 10:35:02.252925  367608 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 10:35:02.253017  367608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 10:35:02.262094  367608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1115 10:35:02.272394  367608 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 10:35:02.272467  367608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 10:35:02.282583  367608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1115 10:35:02.293280  367608 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 10:35:02.293346  367608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 10:35:02.303662  367608 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 10:35:02.354565  367608 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 10:35:02.354729  367608 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 10:35:02.385123  367608 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 10:35:02.385201  367608 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1115 10:35:02.385231  367608 kubeadm.go:319] OS: Linux
	I1115 10:35:02.385269  367608 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 10:35:02.385308  367608 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 10:35:02.385351  367608 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 10:35:02.385393  367608 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 10:35:02.385433  367608 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 10:35:02.385481  367608 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 10:35:02.385522  367608 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 10:35:02.385561  367608 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 10:35:02.385602  367608 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 10:35:02.460034  367608 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 10:35:02.460205  367608 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 10:35:02.460365  367608 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 10:35:02.468539  367608 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 10:35:00.333123  358343 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.143915574s
	I1115 10:35:00.909515  358343 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.720435889s
	I1115 10:35:02.691418  358343 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.502281905s
	I1115 10:35:02.704108  358343 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 10:35:02.720604  358343 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 10:35:02.737329  358343 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 10:35:02.737599  358343 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-719574 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 10:35:02.747326  358343 kubeadm.go:319] [bootstrap-token] Using token: ob95li.bwu5dbqfa14hsvt0
	I1115 10:35:02.110046  368849 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:35:02.114790  368849 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:35:02.114831  368849 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:35:02.114844  368849 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/addons for local assets ...
	I1115 10:35:02.114898  368849 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/files for local assets ...
	I1115 10:35:02.115028  368849 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem -> 589622.pem in /etc/ssl/certs
	I1115 10:35:02.115160  368849 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:35:02.124610  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:35:02.146846  368849 start.go:296] duration metric: took 167.527166ms for postStartSetup
	I1115 10:35:02.146933  368849 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:35:02.147016  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:02.169248  368849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
	I1115 10:35:02.269154  368849 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:35:02.275482  368849 fix.go:56] duration metric: took 4.92029981s for fixHost
	I1115 10:35:02.275512  368849 start.go:83] releasing machines lock for "no-preload-283677", held for 4.920355261s
	I1115 10:35:02.275586  368849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-283677
	I1115 10:35:02.298638  368849 ssh_runner.go:195] Run: cat /version.json
	I1115 10:35:02.298698  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:02.298727  368849 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:35:02.298824  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:02.322717  368849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
	I1115 10:35:02.323463  368849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
	I1115 10:35:02.488756  368849 ssh_runner.go:195] Run: systemctl --version
	I1115 10:35:02.497019  368849 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:35:02.536446  368849 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:35:02.541399  368849 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:35:02.541491  368849 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:35:02.549838  368849 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:35:02.549867  368849 start.go:496] detecting cgroup driver to use...
	I1115 10:35:02.549905  368849 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:35:02.549977  368849 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:35:02.565514  368849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:35:02.577769  368849 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:35:02.577831  368849 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:35:02.592941  368849 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:35:02.605708  368849 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:35:02.688663  368849 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:35:02.788806  368849 docker.go:234] disabling docker service ...
	I1115 10:35:02.788873  368849 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:35:02.807424  368849 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:35:02.823661  368849 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:35:02.915268  368849 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:35:03.000433  368849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:35:03.014052  368849 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:35:03.029226  368849 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:35:03.029290  368849 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:03.038642  368849 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:35:03.038706  368849 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:03.049065  368849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:03.058622  368849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:03.068077  368849 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:35:03.076469  368849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:03.085644  368849 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:03.094454  368849 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:03.104534  368849 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:35:03.112679  368849 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:35:03.121020  368849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:03.222503  368849 ssh_runner.go:195] Run: sudo systemctl restart crio
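
The sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) and then restarts CRI-O. A simplified Go sketch of one of those edits, replacing the pause_image line the way the logged sed command does (it assumes the drop-in file already contains a pause_image entry and is purely illustrative):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setPauseImage rewrites the pause_image setting in a CRI-O drop-in config,
// equivalent to: sed -i 's|^.*pause_image = .*$|pause_image = "<img>"|' <conf>.
func setPauseImage(confPath, image string) error {
	data, err := os.ReadFile(confPath)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	updated := re.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
	return os.WriteFile(confPath, updated, 0644)
}

func main() {
	if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```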
	I1115 10:35:03.357676  368849 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:35:03.357737  368849 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:35:03.361906  368849 start.go:564] Will wait 60s for crictl version
	I1115 10:35:03.361977  368849 ssh_runner.go:195] Run: which crictl
	I1115 10:35:03.365723  368849 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:35:03.404943  368849 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:35:03.405117  368849 ssh_runner.go:195] Run: crio --version
	I1115 10:35:03.438126  368849 ssh_runner.go:195] Run: crio --version
	I1115 10:35:03.469166  368849 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:35:02.748656  358343 out.go:252]   - Configuring RBAC rules ...
	I1115 10:35:02.748791  358343 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 10:35:02.753461  358343 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 10:35:02.760294  358343 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 10:35:02.763295  358343 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 10:35:02.766433  358343 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 10:35:02.769201  358343 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 10:35:03.098048  358343 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 10:35:03.518284  358343 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 10:35:04.097647  358343 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 10:35:04.098832  358343 kubeadm.go:319] 
	I1115 10:35:04.098915  358343 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 10:35:04.098925  358343 kubeadm.go:319] 
	I1115 10:35:04.099031  358343 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 10:35:04.099041  358343 kubeadm.go:319] 
	I1115 10:35:04.099069  358343 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 10:35:04.099152  358343 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 10:35:04.099270  358343 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 10:35:04.099293  358343 kubeadm.go:319] 
	I1115 10:35:04.099366  358343 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 10:35:04.099378  358343 kubeadm.go:319] 
	I1115 10:35:04.099446  358343 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 10:35:04.099456  358343 kubeadm.go:319] 
	I1115 10:35:04.099530  358343 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 10:35:04.099646  358343 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 10:35:04.099741  358343 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 10:35:04.099750  358343 kubeadm.go:319] 
	I1115 10:35:04.099881  358343 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 10:35:04.100020  358343 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 10:35:04.100033  358343 kubeadm.go:319] 
	I1115 10:35:04.100148  358343 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ob95li.bwu5dbqfa14hsvt0 \
	I1115 10:35:04.100288  358343 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c958101d3a67868123c5e439a6c1ea6e30a99a6538b76cbe461e2c919cf1aec \
	I1115 10:35:04.100317  358343 kubeadm.go:319] 	--control-plane 
	I1115 10:35:04.100323  358343 kubeadm.go:319] 
	I1115 10:35:04.100427  358343 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 10:35:04.100436  358343 kubeadm.go:319] 
	I1115 10:35:04.100540  358343 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ob95li.bwu5dbqfa14hsvt0 \
	I1115 10:35:04.100692  358343 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c958101d3a67868123c5e439a6c1ea6e30a99a6538b76cbe461e2c919cf1aec 
	I1115 10:35:04.103489  358343 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1115 10:35:04.103671  358343 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1115 10:35:04.103762  358343 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 10:35:04.103783  358343 cni.go:84] Creating CNI manager for ""
	I1115 10:35:04.103792  358343 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:35:04.105369  358343 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1115 10:35:03.470218  368849 cli_runner.go:164] Run: docker network inspect no-preload-283677 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:35:03.490738  368849 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1115 10:35:03.496698  368849 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:35:03.510823  368849 kubeadm.go:884] updating cluster {Name:no-preload-283677 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-283677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:35:03.511006  368849 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:35:03.511057  368849 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:35:03.547890  368849 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:35:03.547916  368849 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:35:03.547926  368849 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1115 10:35:03.548063  368849 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-283677 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-283677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:35:03.548166  368849 ssh_runner.go:195] Run: crio config
	I1115 10:35:03.599181  368849 cni.go:84] Creating CNI manager for ""
	I1115 10:35:03.599206  368849 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:35:03.599223  368849 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:35:03.599244  368849 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-283677 NodeName:no-preload-283677 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:35:03.599372  368849 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-283677"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:35:03.599441  368849 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:35:03.610310  368849 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:35:03.610397  368849 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:35:03.619706  368849 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1115 10:35:03.632722  368849 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:35:03.645918  368849 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1115 10:35:03.658741  368849 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:35:03.662232  368849 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:35:03.671761  368849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:03.756659  368849 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:35:03.786378  368849 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677 for IP: 192.168.76.2
	I1115 10:35:03.786402  368849 certs.go:195] generating shared ca certs ...
	I1115 10:35:03.786422  368849 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:03.786604  368849 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 10:35:03.786672  368849 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 10:35:03.786685  368849 certs.go:257] generating profile certs ...
	I1115 10:35:03.786797  368849 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/client.key
	I1115 10:35:03.786865  368849 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/apiserver.key.d18d8ebf
	I1115 10:35:03.786925  368849 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/proxy-client.key
	I1115 10:35:03.787095  368849 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:35:03.787136  368849 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:35:03.787149  368849 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:35:03.787190  368849 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:35:03.787228  368849 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:35:03.787263  368849 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:35:03.787329  368849 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:35:03.788176  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:35:03.809608  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:35:03.829918  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:35:03.850004  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:35:03.882797  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 10:35:03.974262  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:35:03.996550  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:35:04.017706  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 10:35:04.035832  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:35:04.053680  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:35:04.072674  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:35:04.091110  368849 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:35:04.106710  368849 ssh_runner.go:195] Run: openssl version
	I1115 10:35:04.113684  368849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:35:04.123025  368849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:04.127895  368849 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:04.127949  368849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:04.173742  368849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:35:04.183070  368849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:35:04.192820  368849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:35:04.197810  368849 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:35:04.197877  368849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:35:04.238270  368849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:35:04.249044  368849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:35:04.260640  368849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:35:04.265573  368849 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:35:04.265640  368849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:35:04.304857  368849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:35:04.316678  368849 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:35:04.321538  368849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:35:04.391497  368849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:35:04.568753  368849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:35:04.685855  368849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:35:04.802487  368849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:35:04.896806  368849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
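
These `openssl x509 -checkend 86400` runs confirm that each existing control-plane certificate is still valid for at least 24 hours before it is reused on restart. The same check can be done natively; below is a small Go sketch using crypto/x509 (the certificate path is hard-coded only for illustration):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid for at
// least d from now, mirroring `openssl x509 -checkend <seconds>`.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for 24h:", ok)
}
```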
	I1115 10:35:05.014514  368849 kubeadm.go:401] StartCluster: {Name:no-preload-283677 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-283677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:35:05.014628  368849 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:35:05.014704  368849 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:35:05.102808  368849 cri.go:89] found id: "324a3ff1cd89d80c17c217faf6db6f2b6c9f52f5abe13f2e83485e1e03b0c7aa"
	I1115 10:35:05.102868  368849 cri.go:89] found id: "8c532dc6e69808afe1a0ffb587f828c0b3f6fa37c71b2fbc5ce4abdafdedf008"
	I1115 10:35:05.102874  368849 cri.go:89] found id: "ac246fc71f81d0fb5e3cd430730c903f2ca388376feb3a8dc321eb565aa6c5ee"
	I1115 10:35:05.102879  368849 cri.go:89] found id: "c26ba954b1e2fc1ea4ef12fc0801c0a31171b23e67cf48fdeb9207cbdb3ba3b0"
	I1115 10:35:05.102883  368849 cri.go:89] found id: ""
	I1115 10:35:05.102973  368849 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:35:05.170451  368849 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:35:05Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:35:05.170545  368849 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:35:05.180340  368849 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:35:05.180361  368849 kubeadm.go:598] restartPrimaryControlPlane start ...
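
Here the `sudo ls` over /var/lib/kubelet/kubeadm-flags.env, /var/lib/kubelet/config.yaml and /var/lib/minikube/etcd exiting 0 is what flips the flow from a fresh `kubeadm init` to the cluster-restart path. A rough Go sketch of that decision, probing the same marker paths locally (illustrative only, not minikube's code):

```go
package main

import (
	"fmt"
	"os"
)

// hasExistingCluster mirrors the `sudo ls <paths>` probe in the log: if every
// marker path exists, a previous kubeadm run left state behind and a restart
// is attempted instead of a fresh init.
func hasExistingCluster() bool {
	markers := []string{
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd",
	}
	for _, p := range markers {
		if _, err := os.Stat(p); err != nil {
			return false
		}
	}
	return true
}

func main() {
	if hasExistingCluster() {
		fmt.Println("found existing configuration files, will attempt cluster restart")
	} else {
		fmt.Println("no prior cluster state, running kubeadm init")
	}
}
```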
	I1115 10:35:05.180411  368849 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:35:05.189950  368849 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:35:05.190767  368849 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-283677" does not appear in /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:35:05.192333  368849 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-55448/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-283677" cluster setting kubeconfig missing "no-preload-283677" context setting]
	I1115 10:35:05.193068  368849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:05.194778  368849 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:35:05.205108  368849 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1115 10:35:05.205141  368849 kubeadm.go:602] duration metric: took 24.774201ms to restartPrimaryControlPlane
	I1115 10:35:05.205152  368849 kubeadm.go:403] duration metric: took 190.652551ms to StartCluster
	I1115 10:35:05.205176  368849 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:05.205246  368849 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:35:05.206385  368849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
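At 10:35:05.190 the kubeconfig check (kubeconfig.go:47/:62) finds no "no-preload-283677" cluster or context entry and repairs the file in place. Below is a hedged client-go sketch of that kind of repair; the function name and flow are assumptions for illustration, not minikube's code:

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// ensureContext adds missing cluster and context entries for a profile to an
// existing kubeconfig and writes the file back.
func ensureContext(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; !ok {
		cluster := clientcmdapi.NewCluster()
		cluster.Server = server // e.g. https://192.168.76.2:8443 for this profile
		cfg.Clusters[name] = cluster
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := clientcmdapi.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name
		cfg.Contexts[name] = ctx
	}
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	_ = ensureContext(
		"/home/jenkins/minikube-integration/21894-55448/kubeconfig",
		"no-preload-283677",
		"https://192.168.76.2:8443",
	)
}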
	I1115 10:35:05.206642  368849 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:35:05.207102  368849 config.go:182] Loaded profile config "no-preload-283677": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:05.207057  368849 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:35:05.207165  368849 addons.go:70] Setting storage-provisioner=true in profile "no-preload-283677"
	I1115 10:35:05.207181  368849 addons.go:239] Setting addon storage-provisioner=true in "no-preload-283677"
	W1115 10:35:05.207190  368849 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:35:05.207190  368849 addons.go:70] Setting dashboard=true in profile "no-preload-283677"
	I1115 10:35:05.207217  368849 addons.go:239] Setting addon dashboard=true in "no-preload-283677"
	I1115 10:35:05.207212  368849 addons.go:70] Setting default-storageclass=true in profile "no-preload-283677"
	W1115 10:35:05.207233  368849 addons.go:248] addon dashboard should already be in state true
	I1115 10:35:05.207275  368849 host.go:66] Checking if "no-preload-283677" exists ...
	I1115 10:35:05.207221  368849 host.go:66] Checking if "no-preload-283677" exists ...
	I1115 10:35:05.207358  368849 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-283677"
	I1115 10:35:05.207703  368849 cli_runner.go:164] Run: docker container inspect no-preload-283677 --format={{.State.Status}}
	I1115 10:35:05.207808  368849 cli_runner.go:164] Run: docker container inspect no-preload-283677 --format={{.State.Status}}
	I1115 10:35:05.207815  368849 cli_runner.go:164] Run: docker container inspect no-preload-283677 --format={{.State.Status}}
	I1115 10:35:05.211477  368849 out.go:179] * Verifying Kubernetes components...
	I1115 10:35:05.213140  368849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:05.232351  368849 addons.go:239] Setting addon default-storageclass=true in "no-preload-283677"
	W1115 10:35:05.232370  368849 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:35:05.232392  368849 host.go:66] Checking if "no-preload-283677" exists ...
	I1115 10:35:05.232689  368849 cli_runner.go:164] Run: docker container inspect no-preload-283677 --format={{.State.Status}}
	I1115 10:35:05.232981  368849 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:35:05.232986  368849 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 10:35:05.234251  368849 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:05.234272  368849 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:35:05.234273  368849 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 10:35:05.234330  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:05.238080  368849 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 10:35:05.238101  368849 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 10:35:05.238157  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:05.254202  368849 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:05.254227  368849 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:35:05.254298  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:05.257406  368849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
	I1115 10:35:05.259999  368849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
	I1115 10:35:05.279539  368849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
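The three "new ssh client" lines above reuse host port 33114, which the preceding docker container inspect calls resolved from the container's published 22/tcp mapping. A short sketch of that lookup (the Go template string is taken verbatim from the log; the helper name is assumed):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort asks Docker which host port is published for the container's
// 22/tcp endpoint, so an SSH client can dial 127.0.0.1:<port>.
func hostSSHPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("no-preload-283677")
	fmt.Println(port, err) // 33114 in the run above
}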
	I1115 10:35:05.585009  368849 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 10:35:05.585042  368849 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 10:35:05.590684  368849 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:35:05.602650  368849 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 10:35:05.602676  368849 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 10:35:05.672946  368849 node_ready.go:35] waiting up to 6m0s for node "no-preload-283677" to be "Ready" ...
	I1115 10:35:05.684403  368849 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 10:35:05.684432  368849 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 10:35:05.690190  368849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:05.692466  368849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:05.769359  368849 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 10:35:05.769382  368849 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 10:35:05.787603  368849 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 10:35:05.787632  368849 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 10:35:05.883926  368849 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 10:35:05.883964  368849 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 10:35:05.974542  368849 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 10:35:05.974570  368849 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 10:35:05.992886  368849 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 10:35:05.992918  368849 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 10:35:06.012080  368849 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:35:06.012115  368849 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 10:35:06.084770  368849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1115 10:35:03.268007  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	W1115 10:35:05.764688  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	I1115 10:35:02.470272  367608 out.go:252]   - Generating certificates and keys ...
	I1115 10:35:02.470390  367608 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 10:35:02.470490  367608 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 10:35:02.779536  367608 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 10:35:02.945500  367608 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 10:35:03.605573  367608 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 10:35:03.703228  367608 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 10:35:04.283194  367608 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 10:35:04.283412  367608 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-026691 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1115 10:35:04.682718  367608 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 10:35:04.683098  367608 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-026691 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1115 10:35:05.030500  367608 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 10:35:05.382333  367608 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 10:35:06.139095  367608 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 10:35:06.139385  367608 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 10:35:06.418023  367608 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 10:35:06.723330  367608 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 10:35:07.482824  367608 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 10:35:08.034181  367608 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 10:35:08.156422  367608 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 10:35:08.157215  367608 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 10:35:08.161626  367608 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 10:35:04.106620  358343 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 10:35:04.111162  358343 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 10:35:04.111192  358343 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 10:35:04.124905  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 10:35:04.382718  358343 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:35:04.382786  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:04.382833  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-719574 minikube.k8s.io/updated_at=2025_11_15T10_35_04_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510 minikube.k8s.io/name=embed-certs-719574 minikube.k8s.io/primary=true
	I1115 10:35:04.628907  358343 ops.go:34] apiserver oom_adj: -16
	I1115 10:35:04.629011  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:05.129861  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:05.629943  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:06.129410  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:06.629879  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:07.129154  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:07.630326  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:08.129680  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:08.629294  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:08.746205  358343 kubeadm.go:1114] duration metric: took 4.363478497s to wait for elevateKubeSystemPrivileges
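The repeated "kubectl get sa default" runs between 10:35:04 and 10:35:08 appear to be a wait loop: the step polls roughly every 500ms until the default ServiceAccount (created asynchronously by the controller-manager) exists, and only then does elevateKubeSystemPrivileges report its duration. A rough sketch of such a loop, with assumed names:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount polls "kubectl get sa default" until it
// succeeds or the timeout expires, mirroring the retry cadence in the log.
func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil // ServiceAccount exists; the privileges step can finish
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default ServiceAccount did not appear within %s", timeout)
}

func main() {
	err := waitForDefaultServiceAccount(
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute,
	)
	fmt.Println("wait result:", err)
}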
	I1115 10:35:08.746256  358343 kubeadm.go:403] duration metric: took 20.927857879s to StartCluster
	I1115 10:35:08.746281  358343 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:08.746351  358343 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:35:08.748593  358343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:08.748832  358343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 10:35:08.748841  358343 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:35:08.749290  358343 config.go:182] Loaded profile config "embed-certs-719574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:08.749362  358343 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:35:08.749448  358343 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-719574"
	I1115 10:35:08.749468  358343 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-719574"
	I1115 10:35:08.749501  358343 host.go:66] Checking if "embed-certs-719574" exists ...
	I1115 10:35:08.751286  358343 addons.go:70] Setting default-storageclass=true in profile "embed-certs-719574"
	I1115 10:35:08.751326  358343 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-719574"
	I1115 10:35:08.751768  358343 cli_runner.go:164] Run: docker container inspect embed-certs-719574 --format={{.State.Status}}
	I1115 10:35:08.752060  358343 cli_runner.go:164] Run: docker container inspect embed-certs-719574 --format={{.State.Status}}
	I1115 10:35:08.756196  358343 out.go:179] * Verifying Kubernetes components...
	I1115 10:35:08.757464  358343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:08.784018  358343 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:35:08.785232  358343 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:08.785253  358343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:35:08.785418  358343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:08.788366  358343 addons.go:239] Setting addon default-storageclass=true in "embed-certs-719574"
	I1115 10:35:08.788420  358343 host.go:66] Checking if "embed-certs-719574" exists ...
	I1115 10:35:08.788915  358343 cli_runner.go:164] Run: docker container inspect embed-certs-719574 --format={{.State.Status}}
	I1115 10:35:08.826800  358343 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:08.826832  358343 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:35:08.826903  358343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:08.829210  358343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:08.860334  358343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:07.982406  368849 node_ready.go:49] node "no-preload-283677" is "Ready"
	I1115 10:35:07.982441  368849 node_ready.go:38] duration metric: took 2.309447891s for node "no-preload-283677" to be "Ready" ...
	I1115 10:35:07.982458  368849 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:35:07.982514  368849 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:35:08.305043  368849 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.614815003s)
	I1115 10:35:09.469849  368849 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.777345954s)
	I1115 10:35:09.570449  368849 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.587920258s)
	I1115 10:35:09.570502  368849 api_server.go:72] duration metric: took 4.363836242s to wait for apiserver process to appear ...
	I1115 10:35:09.570512  368849 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:35:09.570533  368849 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:35:09.571399  368849 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.485550405s)
	I1115 10:35:09.577304  368849 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:35:09.577335  368849 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
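The 500 responses above come from the rbac/bootstrap-roles post-start hook not having finished; the wait loop keeps polling /healthz until it returns 200, which happens at 10:35:10.081 further down. A minimal sketch of such a poller (names and TLS handling are assumptions, not minikube's actual client):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 OK
// or the timeout expires, printing the failure body on each 500 as the log does.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch only: skip verification; a real client would trust the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.76.2:8443/healthz", 4*time.Minute))
}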
	I1115 10:35:09.615635  368849 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-283677 addons enable metrics-server
	
	I1115 10:35:09.664183  368849 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1115 10:35:09.099411  358343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 10:35:09.117209  358343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:09.178025  358343 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:35:09.223129  358343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:09.623693  358343 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
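The sed pipeline at 10:35:09.099 rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.94.1 here), which the "host record injected" line then confirms. A hedged client-go sketch of the same edit, for illustration only (minikube performs it over SSH with kubectl replace, as shown above; package and function names are assumed):

package corednshosts

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// injectHostRecord inserts a hosts{} stanza ahead of the forward directive in
// the CoreDNS Corefile so host.minikube.internal resolves inside the cluster.
func injectHostRecord(kubeconfig, hostIP string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	hosts := "        hosts {\n           " + hostIP + " host.minikube.internal\n           fallthrough\n        }\n"
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "        forward .", hosts+"        forward .", 1)
	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(context.TODO(), cm, metav1.UpdateOptions{})
	return err
}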
	I1115 10:35:10.025397  358343 node_ready.go:35] waiting up to 6m0s for node "embed-certs-719574" to be "Ready" ...
	I1115 10:35:10.035887  358343 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1115 10:35:09.713917  368849 addons.go:515] duration metric: took 4.506959144s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1115 10:35:10.070918  368849 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:35:10.081303  368849 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1115 10:35:10.083487  368849 api_server.go:141] control plane version: v1.34.1
	I1115 10:35:10.083520  368849 api_server.go:131] duration metric: took 513.000945ms to wait for apiserver health ...
	I1115 10:35:10.083532  368849 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:35:10.088663  368849 system_pods.go:59] 8 kube-system pods found
	I1115 10:35:10.088717  368849 system_pods.go:61] "coredns-66bc5c9577-66nkj" [077957ec-b312-4412-a6b1-ae36eb2e7e16] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:10.088737  368849 system_pods.go:61] "etcd-no-preload-283677" [bf5ec52e-181c-4b5c-abb2-80ac3fcc26ef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:35:10.088745  368849 system_pods.go:61] "kindnet-x5rwg" [e504759b-46cd-4a41-a8cd-050722131a7d] Running
	I1115 10:35:10.088754  368849 system_pods.go:61] "kube-apiserver-no-preload-283677" [a1c78910-24db-4447-bfb5-f0dd4685d2b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:35:10.088761  368849 system_pods.go:61] "kube-controller-manager-no-preload-283677" [c7c2ba73-517d-48fc-b874-2ab3b653c5a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:35:10.088771  368849 system_pods.go:61] "kube-proxy-vjbxg" [68dffa75-569b-42ef-b4b2-c02a9c1938e7] Running
	I1115 10:35:10.088779  368849 system_pods.go:61] "kube-scheduler-no-preload-283677" [9e0abc54-bc72-4122-b46f-08a74328972d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:35:10.088786  368849 system_pods.go:61] "storage-provisioner" [24222831-4bc3-4c24-87ba-fd523a1e0c85] Running
	I1115 10:35:10.088797  368849 system_pods.go:74] duration metric: took 5.256404ms to wait for pod list to return data ...
	I1115 10:35:10.088807  368849 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:35:10.091629  368849 default_sa.go:45] found service account: "default"
	I1115 10:35:10.091653  368849 default_sa.go:55] duration metric: took 2.838862ms for default service account to be created ...
	I1115 10:35:10.091661  368849 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:35:10.094315  368849 system_pods.go:86] 8 kube-system pods found
	I1115 10:35:10.094343  368849 system_pods.go:89] "coredns-66bc5c9577-66nkj" [077957ec-b312-4412-a6b1-ae36eb2e7e16] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:10.094352  368849 system_pods.go:89] "etcd-no-preload-283677" [bf5ec52e-181c-4b5c-abb2-80ac3fcc26ef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:35:10.094358  368849 system_pods.go:89] "kindnet-x5rwg" [e504759b-46cd-4a41-a8cd-050722131a7d] Running
	I1115 10:35:10.094364  368849 system_pods.go:89] "kube-apiserver-no-preload-283677" [a1c78910-24db-4447-bfb5-f0dd4685d2b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:35:10.094370  368849 system_pods.go:89] "kube-controller-manager-no-preload-283677" [c7c2ba73-517d-48fc-b874-2ab3b653c5a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:35:10.094375  368849 system_pods.go:89] "kube-proxy-vjbxg" [68dffa75-569b-42ef-b4b2-c02a9c1938e7] Running
	I1115 10:35:10.094380  368849 system_pods.go:89] "kube-scheduler-no-preload-283677" [9e0abc54-bc72-4122-b46f-08a74328972d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:35:10.094385  368849 system_pods.go:89] "storage-provisioner" [24222831-4bc3-4c24-87ba-fd523a1e0c85] Running
	I1115 10:35:10.094397  368849 system_pods.go:126] duration metric: took 2.730305ms to wait for k8s-apps to be running ...
	I1115 10:35:10.094406  368849 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:35:10.094448  368849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:35:10.111038  368849 system_svc.go:56] duration metric: took 16.619407ms WaitForService to wait for kubelet
	I1115 10:35:10.111085  368849 kubeadm.go:587] duration metric: took 4.90441795s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:35:10.111109  368849 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:35:10.115110  368849 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:35:10.115139  368849 node_conditions.go:123] node cpu capacity is 8
	I1115 10:35:10.115152  368849 node_conditions.go:105] duration metric: took 4.037488ms to run NodePressure ...
	I1115 10:35:10.115164  368849 start.go:242] waiting for startup goroutines ...
	I1115 10:35:10.115171  368849 start.go:247] waiting for cluster config update ...
	I1115 10:35:10.115181  368849 start.go:256] writing updated cluster config ...
	I1115 10:35:10.115423  368849 ssh_runner.go:195] Run: rm -f paused
	I1115 10:35:10.120133  368849 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:35:10.125364  368849 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-66nkj" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 10:35:07.766656  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	W1115 10:35:09.768019  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	W1115 10:35:12.265348  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	I1115 10:35:08.162939  367608 out.go:252]   - Booting up control plane ...
	I1115 10:35:08.163067  367608 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 10:35:08.163214  367608 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 10:35:08.164559  367608 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 10:35:08.191442  367608 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 10:35:08.191597  367608 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 10:35:08.204536  367608 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 10:35:08.204949  367608 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 10:35:08.205027  367608 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 10:35:08.354479  367608 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 10:35:08.354645  367608 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 10:35:08.861234  367608 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 506.557584ms
	I1115 10:35:08.866845  367608 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 10:35:08.866999  367608 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1115 10:35:08.867498  367608 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 10:35:08.867607  367608 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 10:35:11.654618  367608 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.787069529s
	I1115 10:35:10.037459  358343 addons.go:515] duration metric: took 1.288097776s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1115 10:35:10.128075  358343 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-719574" context rescaled to 1 replicas
	W1115 10:35:12.028773  358343 node_ready.go:57] node "embed-certs-719574" has "Ready":"False" status (will retry)
	I1115 10:35:12.738784  367608 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.871726778s
	I1115 10:35:14.368847  367608 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501769669s
	I1115 10:35:14.386759  367608 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 10:35:14.403947  367608 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 10:35:14.415210  367608 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 10:35:14.415430  367608 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-026691 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 10:35:14.423662  367608 kubeadm.go:319] [bootstrap-token] Using token: la4gix.ai6olk5ks1jiibdz
	I1115 10:35:14.424934  367608 out.go:252]   - Configuring RBAC rules ...
	I1115 10:35:14.425149  367608 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 10:35:14.429405  367608 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 10:35:14.436815  367608 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 10:35:14.440353  367608 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 10:35:14.443132  367608 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 10:35:14.445801  367608 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 10:35:14.780630  367608 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 10:35:15.244870  367608 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 10:35:15.776235  367608 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 10:35:15.777454  367608 kubeadm.go:319] 
	I1115 10:35:15.777560  367608 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 10:35:15.777580  367608 kubeadm.go:319] 
	I1115 10:35:15.777679  367608 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 10:35:15.777709  367608 kubeadm.go:319] 
	I1115 10:35:15.777773  367608 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 10:35:15.777885  367608 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 10:35:15.777990  367608 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 10:35:15.778001  367608 kubeadm.go:319] 
	I1115 10:35:15.778075  367608 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 10:35:15.778084  367608 kubeadm.go:319] 
	I1115 10:35:15.778150  367608 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 10:35:15.778161  367608 kubeadm.go:319] 
	I1115 10:35:15.778232  367608 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 10:35:15.778338  367608 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 10:35:15.778434  367608 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 10:35:15.778441  367608 kubeadm.go:319] 
	I1115 10:35:15.778545  367608 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 10:35:15.778663  367608 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 10:35:15.778670  367608 kubeadm.go:319] 
	I1115 10:35:15.778785  367608 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token la4gix.ai6olk5ks1jiibdz \
	I1115 10:35:15.778928  367608 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c958101d3a67868123c5e439a6c1ea6e30a99a6538b76cbe461e2c919cf1aec \
	I1115 10:35:15.778967  367608 kubeadm.go:319] 	--control-plane 
	I1115 10:35:15.778973  367608 kubeadm.go:319] 
	I1115 10:35:15.779089  367608 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 10:35:15.779096  367608 kubeadm.go:319] 
	I1115 10:35:15.779206  367608 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token la4gix.ai6olk5ks1jiibdz \
	I1115 10:35:15.779345  367608 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c958101d3a67868123c5e439a6c1ea6e30a99a6538b76cbe461e2c919cf1aec 
	I1115 10:35:15.783505  367608 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1115 10:35:15.783826  367608 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1115 10:35:15.784060  367608 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 10:35:15.784094  367608 cni.go:84] Creating CNI manager for ""
	I1115 10:35:15.784108  367608 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:35:15.786778  367608 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1115 10:35:12.132013  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:14.175249  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:16.631763  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:14.265828  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	W1115 10:35:16.764850  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	I1115 10:35:15.788182  367608 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 10:35:15.793094  367608 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 10:35:15.793115  367608 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 10:35:15.809048  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 10:35:16.098742  367608 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:35:16.098819  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:16.098855  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-026691 minikube.k8s.io/updated_at=2025_11_15T10_35_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510 minikube.k8s.io/name=default-k8s-diff-port-026691 minikube.k8s.io/primary=true
	I1115 10:35:16.112393  367608 ops.go:34] apiserver oom_adj: -16
	I1115 10:35:16.271409  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:16.771783  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:17.271668  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1115 10:35:14.029094  358343 node_ready.go:57] node "embed-certs-719574" has "Ready":"False" status (will retry)
	W1115 10:35:16.031395  358343 node_ready.go:57] node "embed-certs-719574" has "Ready":"False" status (will retry)
	W1115 10:35:18.528752  358343 node_ready.go:57] node "embed-certs-719574" has "Ready":"False" status (will retry)
	I1115 10:35:17.772413  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:18.271612  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:18.772434  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:19.272327  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:19.771542  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:20.271635  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:20.771571  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:20.950683  367608 kubeadm.go:1114] duration metric: took 4.851926991s to wait for elevateKubeSystemPrivileges
	I1115 10:35:20.950730  367608 kubeadm.go:403] duration metric: took 18.793128713s to StartCluster
	I1115 10:35:20.950755  367608 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:20.950836  367608 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:35:20.954212  367608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:20.954530  367608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 10:35:20.954547  367608 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:35:20.954629  367608 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:35:20.954736  367608 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-026691"
	I1115 10:35:20.954764  367608 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-026691"
	I1115 10:35:20.954792  367608 config.go:182] Loaded profile config "default-k8s-diff-port-026691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:20.954800  367608 host.go:66] Checking if "default-k8s-diff-port-026691" exists ...
	I1115 10:35:20.954806  367608 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-026691"
	I1115 10:35:20.955146  367608 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-026691"
	I1115 10:35:20.955492  367608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:35:20.955510  367608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:35:20.956132  367608 out.go:179] * Verifying Kubernetes components...
	I1115 10:35:20.957534  367608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:20.983066  367608 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-026691"
	I1115 10:35:20.983119  367608 host.go:66] Checking if "default-k8s-diff-port-026691" exists ...
	I1115 10:35:20.983674  367608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:35:20.983883  367608 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:35:20.985223  367608 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:20.985248  367608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:35:20.985304  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:35:21.009815  367608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:35:21.012487  367608 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:21.012509  367608 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:35:21.012558  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:35:21.043532  367608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:35:21.227388  367608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 10:35:21.242981  367608 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:21.243543  367608 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:21.345690  367608 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:35:21.760244  367608 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1115 10:35:21.984321  367608 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-026691" to be "Ready" ...
	I1115 10:35:21.984944  367608 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1115 10:35:19.130395  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:21.131716  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:19.266357  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	W1115 10:35:21.765103  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	I1115 10:35:21.986234  367608 addons.go:515] duration metric: took 1.031599786s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1115 10:35:22.264566  367608 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-026691" context rescaled to 1 replicas
	I1115 10:35:20.529126  358343 node_ready.go:49] node "embed-certs-719574" is "Ready"
	I1115 10:35:20.529163  358343 node_ready.go:38] duration metric: took 10.503731212s for node "embed-certs-719574" to be "Ready" ...
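The "waiting up to 6m0s for node ... to be Ready" and "has Ready:False (will retry)" lines for embed-certs-719574 resolve here after about 10.5s. A sketch of that kind of readiness poll using client-go (assumed names, not the node_ready.go implementation itself):

package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForNodeReady polls the Node object until its NodeReady condition is True
// or the timeout expires.
func waitForNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true
				}
			}
		}
		time.Sleep(2 * time.Second) // the run above retried a few times before Ready
	}
	return false
}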
	I1115 10:35:20.529181  358343 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:35:20.529240  358343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:35:20.545196  358343 api_server.go:72] duration metric: took 11.796320759s to wait for apiserver process to appear ...
	I1115 10:35:20.545225  358343 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:35:20.545247  358343 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1115 10:35:20.549570  358343 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1115 10:35:20.550653  358343 api_server.go:141] control plane version: v1.34.1
	I1115 10:35:20.550677  358343 api_server.go:131] duration metric: took 5.444907ms to wait for apiserver health ...
	I1115 10:35:20.550686  358343 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:35:20.554086  358343 system_pods.go:59] 8 kube-system pods found
	I1115 10:35:20.554122  358343 system_pods.go:61] "coredns-66bc5c9577-fjzk5" [d4d185bc-88ec-4edb-b250-6a59ee426bf5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:20.554130  358343 system_pods.go:61] "etcd-embed-certs-719574" [293cb467-4f15-417b-944e-457c3ac7e56f] Running
	I1115 10:35:20.554138  358343 system_pods.go:61] "kindnet-ql2r4" [224e2951-6c97-449d-8ff8-f72aa6d36d60] Running
	I1115 10:35:20.554143  358343 system_pods.go:61] "kube-apiserver-embed-certs-719574" [43a83385-040c-470f-8242-5066617ac8d7] Running
	I1115 10:35:20.554152  358343 system_pods.go:61] "kube-controller-manager-embed-certs-719574" [78f16cd1-774e-4556-9db6-bca54bc8214b] Running
	I1115 10:35:20.554156  358343 system_pods.go:61] "kube-proxy-kmc8c" [3534c76c-4b99-4b84-ba00-21d0d49e770f] Running
	I1115 10:35:20.554161  358343 system_pods.go:61] "kube-scheduler-embed-certs-719574" [9b8d2d78-442d-48cc-88be-29b9eb7017e7] Running
	I1115 10:35:20.554169  358343 system_pods.go:61] "storage-provisioner" [39c3baf2-24de-475e-aeef-a10825991ca3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:35:20.554182  358343 system_pods.go:74] duration metric: took 3.483657ms to wait for pod list to return data ...
	I1115 10:35:20.554197  358343 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:35:20.556665  358343 default_sa.go:45] found service account: "default"
	I1115 10:35:20.556685  358343 default_sa.go:55] duration metric: took 2.480305ms for default service account to be created ...
	I1115 10:35:20.556695  358343 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:35:20.559910  358343 system_pods.go:86] 8 kube-system pods found
	I1115 10:35:20.559938  358343 system_pods.go:89] "coredns-66bc5c9577-fjzk5" [d4d185bc-88ec-4edb-b250-6a59ee426bf5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:20.559965  358343 system_pods.go:89] "etcd-embed-certs-719574" [293cb467-4f15-417b-944e-457c3ac7e56f] Running
	I1115 10:35:20.559978  358343 system_pods.go:89] "kindnet-ql2r4" [224e2951-6c97-449d-8ff8-f72aa6d36d60] Running
	I1115 10:35:20.559986  358343 system_pods.go:89] "kube-apiserver-embed-certs-719574" [43a83385-040c-470f-8242-5066617ac8d7] Running
	I1115 10:35:20.559993  358343 system_pods.go:89] "kube-controller-manager-embed-certs-719574" [78f16cd1-774e-4556-9db6-bca54bc8214b] Running
	I1115 10:35:20.560001  358343 system_pods.go:89] "kube-proxy-kmc8c" [3534c76c-4b99-4b84-ba00-21d0d49e770f] Running
	I1115 10:35:20.560007  358343 system_pods.go:89] "kube-scheduler-embed-certs-719574" [9b8d2d78-442d-48cc-88be-29b9eb7017e7] Running
	I1115 10:35:20.560018  358343 system_pods.go:89] "storage-provisioner" [39c3baf2-24de-475e-aeef-a10825991ca3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:35:20.560051  358343 retry.go:31] will retry after 304.306696ms: missing components: kube-dns
	I1115 10:35:20.869745  358343 system_pods.go:86] 8 kube-system pods found
	I1115 10:35:20.870073  358343 system_pods.go:89] "coredns-66bc5c9577-fjzk5" [d4d185bc-88ec-4edb-b250-6a59ee426bf5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:20.870105  358343 system_pods.go:89] "etcd-embed-certs-719574" [293cb467-4f15-417b-944e-457c3ac7e56f] Running
	I1115 10:35:20.870140  358343 system_pods.go:89] "kindnet-ql2r4" [224e2951-6c97-449d-8ff8-f72aa6d36d60] Running
	I1115 10:35:20.870174  358343 system_pods.go:89] "kube-apiserver-embed-certs-719574" [43a83385-040c-470f-8242-5066617ac8d7] Running
	I1115 10:35:20.870205  358343 system_pods.go:89] "kube-controller-manager-embed-certs-719574" [78f16cd1-774e-4556-9db6-bca54bc8214b] Running
	I1115 10:35:20.870223  358343 system_pods.go:89] "kube-proxy-kmc8c" [3534c76c-4b99-4b84-ba00-21d0d49e770f] Running
	I1115 10:35:20.870251  358343 system_pods.go:89] "kube-scheduler-embed-certs-719574" [9b8d2d78-442d-48cc-88be-29b9eb7017e7] Running
	I1115 10:35:20.870298  358343 system_pods.go:89] "storage-provisioner" [39c3baf2-24de-475e-aeef-a10825991ca3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:35:20.870334  358343 retry.go:31] will retry after 263.535875ms: missing components: kube-dns
	I1115 10:35:21.139822  358343 system_pods.go:86] 8 kube-system pods found
	I1115 10:35:21.139860  358343 system_pods.go:89] "coredns-66bc5c9577-fjzk5" [d4d185bc-88ec-4edb-b250-6a59ee426bf5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:21.139867  358343 system_pods.go:89] "etcd-embed-certs-719574" [293cb467-4f15-417b-944e-457c3ac7e56f] Running
	I1115 10:35:21.139875  358343 system_pods.go:89] "kindnet-ql2r4" [224e2951-6c97-449d-8ff8-f72aa6d36d60] Running
	I1115 10:35:21.139879  358343 system_pods.go:89] "kube-apiserver-embed-certs-719574" [43a83385-040c-470f-8242-5066617ac8d7] Running
	I1115 10:35:21.139885  358343 system_pods.go:89] "kube-controller-manager-embed-certs-719574" [78f16cd1-774e-4556-9db6-bca54bc8214b] Running
	I1115 10:35:21.139896  358343 system_pods.go:89] "kube-proxy-kmc8c" [3534c76c-4b99-4b84-ba00-21d0d49e770f] Running
	I1115 10:35:21.139902  358343 system_pods.go:89] "kube-scheduler-embed-certs-719574" [9b8d2d78-442d-48cc-88be-29b9eb7017e7] Running
	I1115 10:35:21.139910  358343 system_pods.go:89] "storage-provisioner" [39c3baf2-24de-475e-aeef-a10825991ca3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:35:21.139934  358343 retry.go:31] will retry after 299.264282ms: missing components: kube-dns
	I1115 10:35:21.445165  358343 system_pods.go:86] 8 kube-system pods found
	I1115 10:35:21.445282  358343 system_pods.go:89] "coredns-66bc5c9577-fjzk5" [d4d185bc-88ec-4edb-b250-6a59ee426bf5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:21.445340  358343 system_pods.go:89] "etcd-embed-certs-719574" [293cb467-4f15-417b-944e-457c3ac7e56f] Running
	I1115 10:35:21.445350  358343 system_pods.go:89] "kindnet-ql2r4" [224e2951-6c97-449d-8ff8-f72aa6d36d60] Running
	I1115 10:35:21.445355  358343 system_pods.go:89] "kube-apiserver-embed-certs-719574" [43a83385-040c-470f-8242-5066617ac8d7] Running
	I1115 10:35:21.445361  358343 system_pods.go:89] "kube-controller-manager-embed-certs-719574" [78f16cd1-774e-4556-9db6-bca54bc8214b] Running
	I1115 10:35:21.445366  358343 system_pods.go:89] "kube-proxy-kmc8c" [3534c76c-4b99-4b84-ba00-21d0d49e770f] Running
	I1115 10:35:21.445371  358343 system_pods.go:89] "kube-scheduler-embed-certs-719574" [9b8d2d78-442d-48cc-88be-29b9eb7017e7] Running
	I1115 10:35:21.445392  358343 system_pods.go:89] "storage-provisioner" [39c3baf2-24de-475e-aeef-a10825991ca3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:35:21.445412  358343 retry.go:31] will retry after 557.501681ms: missing components: kube-dns
	I1115 10:35:22.008757  358343 system_pods.go:86] 8 kube-system pods found
	I1115 10:35:22.008809  358343 system_pods.go:89] "coredns-66bc5c9577-fjzk5" [d4d185bc-88ec-4edb-b250-6a59ee426bf5] Running
	I1115 10:35:22.008817  358343 system_pods.go:89] "etcd-embed-certs-719574" [293cb467-4f15-417b-944e-457c3ac7e56f] Running
	I1115 10:35:22.008823  358343 system_pods.go:89] "kindnet-ql2r4" [224e2951-6c97-449d-8ff8-f72aa6d36d60] Running
	I1115 10:35:22.008830  358343 system_pods.go:89] "kube-apiserver-embed-certs-719574" [43a83385-040c-470f-8242-5066617ac8d7] Running
	I1115 10:35:22.008841  358343 system_pods.go:89] "kube-controller-manager-embed-certs-719574" [78f16cd1-774e-4556-9db6-bca54bc8214b] Running
	I1115 10:35:22.008847  358343 system_pods.go:89] "kube-proxy-kmc8c" [3534c76c-4b99-4b84-ba00-21d0d49e770f] Running
	I1115 10:35:22.008856  358343 system_pods.go:89] "kube-scheduler-embed-certs-719574" [9b8d2d78-442d-48cc-88be-29b9eb7017e7] Running
	I1115 10:35:22.008861  358343 system_pods.go:89] "storage-provisioner" [39c3baf2-24de-475e-aeef-a10825991ca3] Running
	I1115 10:35:22.008871  358343 system_pods.go:126] duration metric: took 1.452168821s to wait for k8s-apps to be running ...
	I1115 10:35:22.008883  358343 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:35:22.008946  358343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:35:22.026719  358343 system_svc.go:56] duration metric: took 17.821769ms WaitForService to wait for kubelet
	I1115 10:35:22.026753  358343 kubeadm.go:587] duration metric: took 13.277885015s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:35:22.026782  358343 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:35:22.030378  358343 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:35:22.030411  358343 node_conditions.go:123] node cpu capacity is 8
	I1115 10:35:22.030431  358343 node_conditions.go:105] duration metric: took 3.642261ms to run NodePressure ...
	I1115 10:35:22.030455  358343 start.go:242] waiting for startup goroutines ...
	I1115 10:35:22.030468  358343 start.go:247] waiting for cluster config update ...
	I1115 10:35:22.030481  358343 start.go:256] writing updated cluster config ...
	I1115 10:35:22.030818  358343 ssh_runner.go:195] Run: rm -f paused
	I1115 10:35:22.035757  358343 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:35:22.039154  358343 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fjzk5" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:22.043361  358343 pod_ready.go:94] pod "coredns-66bc5c9577-fjzk5" is "Ready"
	I1115 10:35:22.043378  358343 pod_ready.go:86] duration metric: took 4.200087ms for pod "coredns-66bc5c9577-fjzk5" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:22.045206  358343 pod_ready.go:83] waiting for pod "etcd-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:22.049024  358343 pod_ready.go:94] pod "etcd-embed-certs-719574" is "Ready"
	I1115 10:35:22.049042  358343 pod_ready.go:86] duration metric: took 3.816184ms for pod "etcd-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:22.050972  358343 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:22.054609  358343 pod_ready.go:94] pod "kube-apiserver-embed-certs-719574" is "Ready"
	I1115 10:35:22.054632  358343 pod_ready.go:86] duration metric: took 3.638655ms for pod "kube-apiserver-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:22.056558  358343 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:22.439711  358343 pod_ready.go:94] pod "kube-controller-manager-embed-certs-719574" is "Ready"
	I1115 10:35:22.439736  358343 pod_ready.go:86] duration metric: took 383.156713ms for pod "kube-controller-manager-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:22.640073  358343 pod_ready.go:83] waiting for pod "kube-proxy-kmc8c" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:23.040176  358343 pod_ready.go:94] pod "kube-proxy-kmc8c" is "Ready"
	I1115 10:35:23.040208  358343 pod_ready.go:86] duration metric: took 400.110752ms for pod "kube-proxy-kmc8c" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:23.240598  358343 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:23.639521  358343 pod_ready.go:94] pod "kube-scheduler-embed-certs-719574" is "Ready"
	I1115 10:35:23.639548  358343 pod_ready.go:86] duration metric: took 398.923501ms for pod "kube-scheduler-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:23.639560  358343 pod_ready.go:40] duration metric: took 1.603769873s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:35:23.683447  358343 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:35:23.685103  358343 out.go:179] * Done! kubectl is now configured to use "embed-certs-719574" cluster and "default" namespace by default
	W1115 10:35:23.630738  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:25.631024  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:24.264211  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	W1115 10:35:26.763775  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	W1115 10:35:23.987891  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:35:25.988063  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:35:29.264506  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	I1115 10:35:30.263692  361423 pod_ready.go:94] pod "coredns-5dd5756b68-bdpfv" is "Ready"
	I1115 10:35:30.263719  361423 pod_ready.go:86] duration metric: took 33.505250042s for pod "coredns-5dd5756b68-bdpfv" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:30.266346  361423 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:30.270190  361423 pod_ready.go:94] pod "etcd-old-k8s-version-087235" is "Ready"
	I1115 10:35:30.270213  361423 pod_ready.go:86] duration metric: took 3.846822ms for pod "etcd-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:30.272557  361423 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:30.276198  361423 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-087235" is "Ready"
	I1115 10:35:30.276215  361423 pod_ready.go:86] duration metric: took 3.640479ms for pod "kube-apiserver-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:30.278541  361423 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:30.461598  361423 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-087235" is "Ready"
	I1115 10:35:30.461629  361423 pod_ready.go:86] duration metric: took 183.068971ms for pod "kube-controller-manager-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:30.662428  361423 pod_ready.go:83] waiting for pod "kube-proxy-gl22j" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:31.062369  361423 pod_ready.go:94] pod "kube-proxy-gl22j" is "Ready"
	I1115 10:35:31.062396  361423 pod_ready.go:86] duration metric: took 399.946151ms for pod "kube-proxy-gl22j" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:31.263048  361423 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:31.662025  361423 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-087235" is "Ready"
	I1115 10:35:31.662055  361423 pod_ready.go:86] duration metric: took 398.980765ms for pod "kube-scheduler-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:31.662070  361423 pod_ready.go:40] duration metric: took 34.909342767s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:35:31.706606  361423 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1115 10:35:31.708343  361423 out.go:203] 
	W1115 10:35:31.709588  361423 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1115 10:35:31.710764  361423 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1115 10:35:31.711983  361423 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-087235" cluster and "default" namespace by default
	W1115 10:35:28.131245  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:30.131470  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:28.487367  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:35:30.987373  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 15 10:35:20 embed-certs-719574 crio[897]: time="2025-11-15T10:35:20.881360705Z" level=info msg="Created container 12256b2144025367f16b51be552bf1a401f3d6f7b4c95b2141f90c764e30fa9b: kube-system/coredns-66bc5c9577-fjzk5/coredns" id=d4452e93-9f07-4603-85b6-4ad003b07bf3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:35:20 embed-certs-719574 crio[897]: time="2025-11-15T10:35:20.882630765Z" level=info msg="Starting container: 12256b2144025367f16b51be552bf1a401f3d6f7b4c95b2141f90c764e30fa9b" id=ad1c3f5b-b3d3-4ca1-a4d2-38584b2e68fe name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:35:20 embed-certs-719574 crio[897]: time="2025-11-15T10:35:20.88552862Z" level=info msg="Started container" PID=1987 containerID=12256b2144025367f16b51be552bf1a401f3d6f7b4c95b2141f90c764e30fa9b description=kube-system/coredns-66bc5c9577-fjzk5/coredns id=ad1c3f5b-b3d3-4ca1-a4d2-38584b2e68fe name=/runtime.v1.RuntimeService/StartContainer sandboxID=32636a7a619970b596d0dd9df446b6433ec29f17196bf8c52ae7b2390eabeb72
	Nov 15 10:35:24 embed-certs-719574 crio[897]: time="2025-11-15T10:35:24.13311361Z" level=info msg="Running pod sandbox: default/busybox/POD" id=fe207227-e6c6-4315-a3c7-80dea5614fe1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:35:24 embed-certs-719574 crio[897]: time="2025-11-15T10:35:24.133232491Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:35:24 embed-certs-719574 crio[897]: time="2025-11-15T10:35:24.138301994Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a9532dc8faadbd4ca07ea2c1ff986d91b0012d0056918a6ac03d26b9638d1986 UID:dc1f55ba-efa8-4823-b9a0-0c2cd11a020d NetNS:/var/run/netns/b2b8b0ad-d616-46a6-9ce0-059c4aa0c7af Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009b6340}] Aliases:map[]}"
	Nov 15 10:35:24 embed-certs-719574 crio[897]: time="2025-11-15T10:35:24.138332764Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 15 10:35:24 embed-certs-719574 crio[897]: time="2025-11-15T10:35:24.148376042Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a9532dc8faadbd4ca07ea2c1ff986d91b0012d0056918a6ac03d26b9638d1986 UID:dc1f55ba-efa8-4823-b9a0-0c2cd11a020d NetNS:/var/run/netns/b2b8b0ad-d616-46a6-9ce0-059c4aa0c7af Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009b6340}] Aliases:map[]}"
	Nov 15 10:35:24 embed-certs-719574 crio[897]: time="2025-11-15T10:35:24.14850403Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 15 10:35:24 embed-certs-719574 crio[897]: time="2025-11-15T10:35:24.151677904Z" level=info msg="Ran pod sandbox a9532dc8faadbd4ca07ea2c1ff986d91b0012d0056918a6ac03d26b9638d1986 with infra container: default/busybox/POD" id=fe207227-e6c6-4315-a3c7-80dea5614fe1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:35:24 embed-certs-719574 crio[897]: time="2025-11-15T10:35:24.152918804Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dd08ad63-c60f-44c9-b6d4-c772b83752cf name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:35:24 embed-certs-719574 crio[897]: time="2025-11-15T10:35:24.153087609Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=dd08ad63-c60f-44c9-b6d4-c772b83752cf name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:35:24 embed-certs-719574 crio[897]: time="2025-11-15T10:35:24.153124873Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=dd08ad63-c60f-44c9-b6d4-c772b83752cf name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:35:24 embed-certs-719574 crio[897]: time="2025-11-15T10:35:24.153866964Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0c135570-8656-4d75-a76e-de1dbe6ac1c9 name=/runtime.v1.ImageService/PullImage
	Nov 15 10:35:24 embed-certs-719574 crio[897]: time="2025-11-15T10:35:24.157605331Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 15 10:35:28 embed-certs-719574 crio[897]: time="2025-11-15T10:35:28.464588676Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=0c135570-8656-4d75-a76e-de1dbe6ac1c9 name=/runtime.v1.ImageService/PullImage
	Nov 15 10:35:28 embed-certs-719574 crio[897]: time="2025-11-15T10:35:28.46540335Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8329a44e-4e0d-4bc1-9c06-7e9246a47ba6 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:35:28 embed-certs-719574 crio[897]: time="2025-11-15T10:35:28.466757947Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1378f5c7-1dba-4328-925f-08396b596d96 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:35:28 embed-certs-719574 crio[897]: time="2025-11-15T10:35:28.470021771Z" level=info msg="Creating container: default/busybox/busybox" id=e6df161a-9b82-4923-8b8e-634119331018 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:35:28 embed-certs-719574 crio[897]: time="2025-11-15T10:35:28.470157148Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:35:28 embed-certs-719574 crio[897]: time="2025-11-15T10:35:28.474181817Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:35:28 embed-certs-719574 crio[897]: time="2025-11-15T10:35:28.474585423Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:35:28 embed-certs-719574 crio[897]: time="2025-11-15T10:35:28.502128353Z" level=info msg="Created container af80b79795842827171e8df8b786d972b3ef0bd53ffe1cdf33eb1a8767eed5be: default/busybox/busybox" id=e6df161a-9b82-4923-8b8e-634119331018 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:35:28 embed-certs-719574 crio[897]: time="2025-11-15T10:35:28.502671282Z" level=info msg="Starting container: af80b79795842827171e8df8b786d972b3ef0bd53ffe1cdf33eb1a8767eed5be" id=f87121f7-6f95-4e3c-b956-0551890de931 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:35:28 embed-certs-719574 crio[897]: time="2025-11-15T10:35:28.50433643Z" level=info msg="Started container" PID=2057 containerID=af80b79795842827171e8df8b786d972b3ef0bd53ffe1cdf33eb1a8767eed5be description=default/busybox/busybox id=f87121f7-6f95-4e3c-b956-0551890de931 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a9532dc8faadbd4ca07ea2c1ff986d91b0012d0056918a6ac03d26b9638d1986
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	af80b79795842       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   a9532dc8faadb       busybox                                      default
	12256b2144025       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      15 seconds ago      Running             coredns                   0                   32636a7a61997       coredns-66bc5c9577-fjzk5                     kube-system
	0267ac3cbc52f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      15 seconds ago      Running             storage-provisioner       0                   b8f8d55870ccd       storage-provisioner                          kube-system
	ea8bb769be1bb       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      26 seconds ago      Running             kube-proxy                0                   0b801da817dbb       kube-proxy-kmc8c                             kube-system
	440d8df3fe277       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      26 seconds ago      Running             kindnet-cni               0                   f362c33e66831       kindnet-ql2r4                                kube-system
	9f2a167895316       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      39 seconds ago      Running             etcd                      0                   e06266de1d4a8       etcd-embed-certs-719574                      kube-system
	b0f2d129523a7       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      39 seconds ago      Running             kube-apiserver            0                   0913bae81d67d       kube-apiserver-embed-certs-719574            kube-system
	d993a675c2933       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      39 seconds ago      Running             kube-scheduler            0                   501431f856cbc       kube-scheduler-embed-certs-719574            kube-system
	74bc985bac5de       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      39 seconds ago      Running             kube-controller-manager   0                   2130f93069abb       kube-controller-manager-embed-certs-719574   kube-system
	
	
	==> coredns [12256b2144025367f16b51be552bf1a401f3d6f7b4c95b2141f90c764e30fa9b] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60541 - 49314 "HINFO IN 8528946146436153821.1323968510477079721. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01329237s
	
	
	==> describe nodes <==
	Name:               embed-certs-719574
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-719574
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=embed-certs-719574
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_35_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:35:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-719574
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:35:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:35:34 +0000   Sat, 15 Nov 2025 10:34:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:35:34 +0000   Sat, 15 Nov 2025 10:34:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:35:34 +0000   Sat, 15 Nov 2025 10:34:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:35:34 +0000   Sat, 15 Nov 2025 10:35:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-719574
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                4a98aacb-8676-41cf-a57c-20957fa3757b
	  Boot ID:                    6a33f504-3a8c-4d49-8fc5-301cf5275efc
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-66bc5c9577-fjzk5                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-embed-certs-719574                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
	  kube-system                 kindnet-ql2r4                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-embed-certs-719574             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-embed-certs-719574    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-kmc8c                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-embed-certs-719574             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 26s                kube-proxy       
	  Warning  CgroupV1                 43s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  42s (x9 over 43s)  kubelet          Node embed-certs-719574 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    42s (x8 over 43s)  kubelet          Node embed-certs-719574 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     42s (x7 over 43s)  kubelet          Node embed-certs-719574 status is now: NodeHasSufficientPID
	  Normal   Starting                 33s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 33s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  33s                kubelet          Node embed-certs-719574 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    33s                kubelet          Node embed-certs-719574 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     33s                kubelet          Node embed-certs-719574 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           29s                node-controller  Node embed-certs-719574 event: Registered Node embed-certs-719574 in Controller
	  Normal   NodeReady                16s                kubelet          Node embed-certs-719574 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 7f 53 f9 15 93 08 06
	[  +0.017821] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c6 66 25 e9 45 28 08 06
	[ +19.178294] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 a3 65 b9 28 15 08 06
	[  +0.347929] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 6d df 33 a5 ad 08 06
	[  +0.000319] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 7f 53 f9 15 93 08 06
	[Nov15 10:33] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 96 ed 10 e8 9a 80 08 06
	[  +0.000481] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 a3 65 b9 28 15 08 06
	[ +35.214217] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 c2 9a 56 52 0c 08 06
	[  +9.104720] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 c2 c4 d1 7c dd 08 06
	[Nov15 10:34] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 68 1e 80 53 05 08 06
	[  +0.000444] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 c2 c4 d1 7c dd 08 06
	[ +18.836046] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 98 db 78 4f 0b 08 06
	[  +0.000708] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 c2 9a 56 52 0c 08 06
	
	
	==> etcd [9f2a167895316a2aeaf9c608e6a1c1c53ba366549d797d13e4bcd934a7a1f01f] <==
	{"level":"warn","ts":"2025-11-15T10:34:59.692255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:59.700223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:59.710661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:59.719005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:59.736569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:59.796630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:59.804047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:59.811332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:59.836543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:59.885945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:59.893781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:59.901583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:59.909220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:59.915930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:59.922558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:59.979261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:59.987930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:59.995468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:00.003474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:00.011365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:00.017918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:00.083943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:00.090498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:00.096963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:00.197982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49172","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:35:36 up  2:17,  0 user,  load average: 4.74, 4.55, 2.79
	Linux embed-certs-719574 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [440d8df3fe27709fc68d9d7bd5a971346066ea299d4dc9a54fcc9c45e2d705d6] <==
	I1115 10:35:10.137763       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:35:10.138079       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1115 10:35:10.138266       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:35:10.138283       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:35:10.138305       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:35:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:35:10.344417       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:35:10.344452       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:35:10.344466       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:35:10.346528       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 10:35:10.745449       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:35:10.745489       1 metrics.go:72] Registering metrics
	I1115 10:35:10.745585       1 controller.go:711] "Syncing nftables rules"
	I1115 10:35:20.350202       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1115 10:35:20.350254       1 main.go:301] handling current node
	I1115 10:35:30.344720       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1115 10:35:30.344757       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b0f2d129523a7cd72452d5d134c239a2e30c22154c8b891b8bc295005dc44f52] <==
	I1115 10:35:00.981865       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:35:00.985881       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:35:00.986010       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 10:35:00.986179       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1115 10:35:00.987449       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 10:35:00.987462       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 10:35:01.031652       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:35:01.786361       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1115 10:35:01.790370       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1115 10:35:01.790392       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:35:02.346177       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:35:02.393842       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:35:02.490743       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1115 10:35:02.496762       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1115 10:35:02.498002       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:35:02.503390       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:35:02.795844       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:35:03.508172       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:35:03.517362       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1115 10:35:03.527454       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 10:35:08.501378       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:35:08.550835       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:35:08.557816       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:35:08.600079       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1115 10:35:34.925309       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:52316: use of closed network connection
	
	
	==> kube-controller-manager [74bc985bac5ded0ba874f60c4a5532050df083ce4a8e30fc3566f8e121ce83e4] <==
	I1115 10:35:07.798745       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 10:35:07.798763       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 10:35:07.798790       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1115 10:35:07.799007       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 10:35:07.799085       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 10:35:07.799127       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:35:07.799187       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 10:35:07.799714       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1115 10:35:07.799772       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 10:35:07.801915       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 10:35:07.801988       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 10:35:07.805736       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 10:35:07.805803       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 10:35:07.807191       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:35:07.808425       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-719574" podCIDRs=["10.244.0.0/24"]
	I1115 10:35:07.810976       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 10:35:07.812178       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:35:07.812188       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 10:35:07.814381       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 10:35:07.823830       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:35:07.823853       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:35:07.823870       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 10:35:07.835552       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:35:07.836641       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 10:35:22.798907       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ea8bb769be1bb1c6af3389d216e90997c52022b6f3133bb6cb3a7a34ca34421e] <==
	I1115 10:35:10.015650       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:35:10.106930       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:35:10.207063       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:35:10.207125       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1115 10:35:10.207259       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:35:10.230090       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:35:10.230172       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:35:10.237025       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:35:10.238543       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:35:10.238583       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:35:10.240838       1 config.go:200] "Starting service config controller"
	I1115 10:35:10.240897       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:35:10.240943       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:35:10.240995       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:35:10.241038       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:35:10.241062       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:35:10.241119       1 config.go:309] "Starting node config controller"
	I1115 10:35:10.241146       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:35:10.241169       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:35:10.341061       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:35:10.341068       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 10:35:10.341110       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [d993a675c2933b24e279b4fea90b2a937879ad2b9c6ad69a338fedbef96d07a3] <==
	E1115 10:35:00.907559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 10:35:00.907721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 10:35:00.907803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 10:35:00.908568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 10:35:00.908722       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 10:35:00.908721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 10:35:00.908741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 10:35:00.908810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 10:35:00.908852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 10:35:00.908875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 10:35:00.908937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 10:35:00.908937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 10:35:00.909033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 10:35:00.976829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 10:35:00.977424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 10:35:00.977641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 10:35:01.738921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 10:35:01.754503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 10:35:01.794768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 10:35:01.813521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 10:35:01.837177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 10:35:01.898775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 10:35:02.040477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 10:35:02.066980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1115 10:35:02.505201       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:35:08 embed-certs-719574 kubelet[1468]: I1115 10:35:08.711130    1468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5l8k\" (UniqueName: \"kubernetes.io/projected/224e2951-6c97-449d-8ff8-f72aa6d36d60-kube-api-access-f5l8k\") pod \"kindnet-ql2r4\" (UID: \"224e2951-6c97-449d-8ff8-f72aa6d36d60\") " pod="kube-system/kindnet-ql2r4"
	Nov 15 10:35:08 embed-certs-719574 kubelet[1468]: I1115 10:35:08.711218    1468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3534c76c-4b99-4b84-ba00-21d0d49e770f-lib-modules\") pod \"kube-proxy-kmc8c\" (UID: \"3534c76c-4b99-4b84-ba00-21d0d49e770f\") " pod="kube-system/kube-proxy-kmc8c"
	Nov 15 10:35:08 embed-certs-719574 kubelet[1468]: I1115 10:35:08.711441    1468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/224e2951-6c97-449d-8ff8-f72aa6d36d60-lib-modules\") pod \"kindnet-ql2r4\" (UID: \"224e2951-6c97-449d-8ff8-f72aa6d36d60\") " pod="kube-system/kindnet-ql2r4"
	Nov 15 10:35:08 embed-certs-719574 kubelet[1468]: E1115 10:35:08.842236    1468 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 15 10:35:08 embed-certs-719574 kubelet[1468]: E1115 10:35:08.842301    1468 projected.go:196] Error preparing data for projected volume kube-api-access-fmqb5 for pod kube-system/kube-proxy-kmc8c: configmap "kube-root-ca.crt" not found
	Nov 15 10:35:08 embed-certs-719574 kubelet[1468]: E1115 10:35:08.842441    1468 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3534c76c-4b99-4b84-ba00-21d0d49e770f-kube-api-access-fmqb5 podName:3534c76c-4b99-4b84-ba00-21d0d49e770f nodeName:}" failed. No retries permitted until 2025-11-15 10:35:09.34240248 +0000 UTC m=+6.079750204 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fmqb5" (UniqueName: "kubernetes.io/projected/3534c76c-4b99-4b84-ba00-21d0d49e770f-kube-api-access-fmqb5") pod "kube-proxy-kmc8c" (UID: "3534c76c-4b99-4b84-ba00-21d0d49e770f") : configmap "kube-root-ca.crt" not found
	Nov 15 10:35:08 embed-certs-719574 kubelet[1468]: E1115 10:35:08.879565    1468 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 15 10:35:08 embed-certs-719574 kubelet[1468]: E1115 10:35:08.879614    1468 projected.go:196] Error preparing data for projected volume kube-api-access-f5l8k for pod kube-system/kindnet-ql2r4: configmap "kube-root-ca.crt" not found
	Nov 15 10:35:08 embed-certs-719574 kubelet[1468]: E1115 10:35:08.881018    1468 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/224e2951-6c97-449d-8ff8-f72aa6d36d60-kube-api-access-f5l8k podName:224e2951-6c97-449d-8ff8-f72aa6d36d60 nodeName:}" failed. No retries permitted until 2025-11-15 10:35:09.379680059 +0000 UTC m=+6.117027782 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-f5l8k" (UniqueName: "kubernetes.io/projected/224e2951-6c97-449d-8ff8-f72aa6d36d60-kube-api-access-f5l8k") pod "kindnet-ql2r4" (UID: "224e2951-6c97-449d-8ff8-f72aa6d36d60") : configmap "kube-root-ca.crt" not found
	Nov 15 10:35:09 embed-certs-719574 kubelet[1468]: W1115 10:35:09.681418    1468 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/77b854d73395a6882ad846c6f51d5c84a63e9af66dc48bfe4bdf582432e5fa0b/crio-f362c33e668317c8f6f40e50c24230b832f2a226193bf717c5e20519920f9a9f WatchSource:0}: Error finding container f362c33e668317c8f6f40e50c24230b832f2a226193bf717c5e20519920f9a9f: Status 404 returned error can't find the container with id f362c33e668317c8f6f40e50c24230b832f2a226193bf717c5e20519920f9a9f
	Nov 15 10:35:09 embed-certs-719574 kubelet[1468]: W1115 10:35:09.719494    1468 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/77b854d73395a6882ad846c6f51d5c84a63e9af66dc48bfe4bdf582432e5fa0b/crio-0b801da817dbbce37e738cae362edde34bb6ffda09b83c7929158aa15dabd06b WatchSource:0}: Error finding container 0b801da817dbbce37e738cae362edde34bb6ffda09b83c7929158aa15dabd06b: Status 404 returned error can't find the container with id 0b801da817dbbce37e738cae362edde34bb6ffda09b83c7929158aa15dabd06b
	Nov 15 10:35:10 embed-certs-719574 kubelet[1468]: I1115 10:35:10.455027    1468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kmc8c" podStartSLOduration=2.455004362 podStartE2EDuration="2.455004362s" podCreationTimestamp="2025-11-15 10:35:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:35:10.443832454 +0000 UTC m=+7.181180181" watchObservedRunningTime="2025-11-15 10:35:10.455004362 +0000 UTC m=+7.192352086"
	Nov 15 10:35:10 embed-certs-719574 kubelet[1468]: I1115 10:35:10.465240    1468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-ql2r4" podStartSLOduration=2.4652153820000002 podStartE2EDuration="2.465215382s" podCreationTimestamp="2025-11-15 10:35:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:35:10.465064249 +0000 UTC m=+7.202411975" watchObservedRunningTime="2025-11-15 10:35:10.465215382 +0000 UTC m=+7.202563107"
	Nov 15 10:35:20 embed-certs-719574 kubelet[1468]: I1115 10:35:20.469607    1468 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 15 10:35:20 embed-certs-719574 kubelet[1468]: I1115 10:35:20.602688    1468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pddtz\" (UniqueName: \"kubernetes.io/projected/39c3baf2-24de-475e-aeef-a10825991ca3-kube-api-access-pddtz\") pod \"storage-provisioner\" (UID: \"39c3baf2-24de-475e-aeef-a10825991ca3\") " pod="kube-system/storage-provisioner"
	Nov 15 10:35:20 embed-certs-719574 kubelet[1468]: I1115 10:35:20.602766    1468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/39c3baf2-24de-475e-aeef-a10825991ca3-tmp\") pod \"storage-provisioner\" (UID: \"39c3baf2-24de-475e-aeef-a10825991ca3\") " pod="kube-system/storage-provisioner"
	Nov 15 10:35:20 embed-certs-719574 kubelet[1468]: I1115 10:35:20.602792    1468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4d185bc-88ec-4edb-b250-6a59ee426bf5-config-volume\") pod \"coredns-66bc5c9577-fjzk5\" (UID: \"d4d185bc-88ec-4edb-b250-6a59ee426bf5\") " pod="kube-system/coredns-66bc5c9577-fjzk5"
	Nov 15 10:35:20 embed-certs-719574 kubelet[1468]: I1115 10:35:20.602822    1468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slxkc\" (UniqueName: \"kubernetes.io/projected/d4d185bc-88ec-4edb-b250-6a59ee426bf5-kube-api-access-slxkc\") pod \"coredns-66bc5c9577-fjzk5\" (UID: \"d4d185bc-88ec-4edb-b250-6a59ee426bf5\") " pod="kube-system/coredns-66bc5c9577-fjzk5"
	Nov 15 10:35:20 embed-certs-719574 kubelet[1468]: W1115 10:35:20.812091    1468 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/77b854d73395a6882ad846c6f51d5c84a63e9af66dc48bfe4bdf582432e5fa0b/crio-b8f8d55870ccd75dce80d4d2795409de56d4c3150dd1a422c94ff4398a51517c WatchSource:0}: Error finding container b8f8d55870ccd75dce80d4d2795409de56d4c3150dd1a422c94ff4398a51517c: Status 404 returned error can't find the container with id b8f8d55870ccd75dce80d4d2795409de56d4c3150dd1a422c94ff4398a51517c
	Nov 15 10:35:20 embed-certs-719574 kubelet[1468]: W1115 10:35:20.839228    1468 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/77b854d73395a6882ad846c6f51d5c84a63e9af66dc48bfe4bdf582432e5fa0b/crio-32636a7a619970b596d0dd9df446b6433ec29f17196bf8c52ae7b2390eabeb72 WatchSource:0}: Error finding container 32636a7a619970b596d0dd9df446b6433ec29f17196bf8c52ae7b2390eabeb72: Status 404 returned error can't find the container with id 32636a7a619970b596d0dd9df446b6433ec29f17196bf8c52ae7b2390eabeb72
	Nov 15 10:35:21 embed-certs-719574 kubelet[1468]: I1115 10:35:21.472230    1468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-fjzk5" podStartSLOduration=12.472207955 podStartE2EDuration="12.472207955s" podCreationTimestamp="2025-11-15 10:35:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:35:21.470914747 +0000 UTC m=+18.208262472" watchObservedRunningTime="2025-11-15 10:35:21.472207955 +0000 UTC m=+18.209555680"
	Nov 15 10:35:21 embed-certs-719574 kubelet[1468]: I1115 10:35:21.498434    1468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.498407603 podStartE2EDuration="11.498407603s" podCreationTimestamp="2025-11-15 10:35:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:35:21.483540323 +0000 UTC m=+18.220888048" watchObservedRunningTime="2025-11-15 10:35:21.498407603 +0000 UTC m=+18.235755328"
	Nov 15 10:35:23 embed-certs-719574 kubelet[1468]: I1115 10:35:23.926159    1468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdwz5\" (UniqueName: \"kubernetes.io/projected/dc1f55ba-efa8-4823-b9a0-0c2cd11a020d-kube-api-access-hdwz5\") pod \"busybox\" (UID: \"dc1f55ba-efa8-4823-b9a0-0c2cd11a020d\") " pod="default/busybox"
	Nov 15 10:35:24 embed-certs-719574 kubelet[1468]: W1115 10:35:24.150554    1468 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/77b854d73395a6882ad846c6f51d5c84a63e9af66dc48bfe4bdf582432e5fa0b/crio-a9532dc8faadbd4ca07ea2c1ff986d91b0012d0056918a6ac03d26b9638d1986 WatchSource:0}: Error finding container a9532dc8faadbd4ca07ea2c1ff986d91b0012d0056918a6ac03d26b9638d1986: Status 404 returned error can't find the container with id a9532dc8faadbd4ca07ea2c1ff986d91b0012d0056918a6ac03d26b9638d1986
	Nov 15 10:35:29 embed-certs-719574 kubelet[1468]: I1115 10:35:29.493257    1468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.180392045 podStartE2EDuration="6.493232019s" podCreationTimestamp="2025-11-15 10:35:23 +0000 UTC" firstStartedPulling="2025-11-15 10:35:24.153411744 +0000 UTC m=+20.890759461" lastFinishedPulling="2025-11-15 10:35:28.466251713 +0000 UTC m=+25.203599435" observedRunningTime="2025-11-15 10:35:29.492817148 +0000 UTC m=+26.230164857" watchObservedRunningTime="2025-11-15 10:35:29.493232019 +0000 UTC m=+26.230579738"
	
	
	==> storage-provisioner [0267ac3cbc52f295545c4058d865aa7cabc7bb552488ca035a5a5549c5dba960] <==
	I1115 10:35:20.881307       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 10:35:20.890603       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:35:20.890666       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 10:35:20.893058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:20.898441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:35:20.898588       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:35:20.898766       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-719574_6c307bc9-944f-42d8-830e-442dcb498d05!
	I1115 10:35:20.898751       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"40f0e3ae-7c7f-492f-ba67-375413ad6bff", APIVersion:"v1", ResourceVersion:"448", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-719574_6c307bc9-944f-42d8-830e-442dcb498d05 became leader
	W1115 10:35:20.902046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:20.905507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:35:20.999341       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-719574_6c307bc9-944f-42d8-830e-442dcb498d05!
	W1115 10:35:22.909089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:22.913942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:24.917298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:24.921347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:26.924509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:26.928494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:28.931099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:28.934756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:30.938737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:30.942810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:32.946939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:32.950656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:34.954482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:34.958850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-719574 -n embed-certs-719574
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-719574 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.23s)
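For manual triage of this failure, a minimal diagnostic sketch (not part of the test run), assuming kubectl access to the embed-certs-719574 context and the addon's default deployment name, metrics-server, per the enable command recorded in the audit table later in this report:

	# Was the root CA configmap the kubelet was waiting for ever published?
	kubectl --context embed-certs-719574 -n kube-system get configmap kube-root-ca.crt
	# Did the metrics-server addon objects get created despite the enable command not completing?
	kubectl --context embed-certs-719574 -n kube-system get deployment metrics-server
	out/minikube-linux-amd64 -p embed-certs-719574 addons list
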

TestStartStop/group/old-k8s-version/serial/Pause (5.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-087235 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-087235 --alsologtostderr -v=1: exit status 80 (1.469939196s)

-- stdout --
	* Pausing node old-k8s-version-087235 ... 
	
	

-- /stdout --
** stderr ** 
	I1115 10:35:43.438468  375871 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:35:43.438818  375871 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:43.438831  375871 out.go:374] Setting ErrFile to fd 2...
	I1115 10:35:43.438839  375871 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:43.439141  375871 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 10:35:43.439407  375871 out.go:368] Setting JSON to false
	I1115 10:35:43.439468  375871 mustload.go:66] Loading cluster: old-k8s-version-087235
	I1115 10:35:43.439837  375871 config.go:182] Loaded profile config "old-k8s-version-087235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 10:35:43.440288  375871 cli_runner.go:164] Run: docker container inspect old-k8s-version-087235 --format={{.State.Status}}
	I1115 10:35:43.458945  375871 host.go:66] Checking if "old-k8s-version-087235" exists ...
	I1115 10:35:43.459262  375871 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:35:43.517609  375871 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:true NGoroutines:88 SystemTime:2025-11-15 10:35:43.506778381 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:35:43.518344  375871 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-087235 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=
true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1115 10:35:43.520375  375871 out.go:179] * Pausing node old-k8s-version-087235 ... 
	I1115 10:35:43.521575  375871 host.go:66] Checking if "old-k8s-version-087235" exists ...
	I1115 10:35:43.521823  375871 ssh_runner.go:195] Run: systemctl --version
	I1115 10:35:43.521859  375871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-087235
	I1115 10:35:43.540452  375871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/old-k8s-version-087235/id_rsa Username:docker}
	I1115 10:35:43.634705  375871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:35:43.660793  375871 pause.go:52] kubelet running: true
	I1115 10:35:43.660910  375871 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:35:43.815840  375871 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:35:43.815948  375871 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:35:43.887438  375871 cri.go:89] found id: "141d480cf3b64b6bc24f8f5013f9a931686b80ed7bf8b12a85bcd2b351953257"
	I1115 10:35:43.887461  375871 cri.go:89] found id: "0e4febf6eeb916f0992d7e320785e3dbfccc6cfc0e69f63884d452c516e43258"
	I1115 10:35:43.887465  375871 cri.go:89] found id: "034573ebc531040d6466ecf78c8b86fefe56032a558c0c6e459de1608b9d81f5"
	I1115 10:35:43.887469  375871 cri.go:89] found id: "7594c7c2d610745a399557dd1247f6642b08937e57147358c301470340e5bbb3"
	I1115 10:35:43.887472  375871 cri.go:89] found id: "ba78e319d11c588a26d306264073a90262f5ec5da127e677e9bdbe733738df60"
	I1115 10:35:43.887475  375871 cri.go:89] found id: "b8b1ccd6451f4579f89a5a5b4368b0f6ed96c45d344cd9110c94b49fdceb39ed"
	I1115 10:35:43.887477  375871 cri.go:89] found id: "8ce75f5e9ad57aaaace9af39da481c138fb57073d1fee7bc88e75f67b8b6e7f7"
	I1115 10:35:43.887480  375871 cri.go:89] found id: "3fd62a9dd47699ac165f43ff643bf99a6efeeed696c5fdcd642be6b2a9374ff1"
	I1115 10:35:43.887483  375871 cri.go:89] found id: "dabb8b48098068214bdf9584f09c135d2dcdd3d138801a98bbacd77829336d90"
	I1115 10:35:43.887503  375871 cri.go:89] found id: "235ae938bb4114fcf19fc01771df50bcd55d9e0253dd1fc357c036d91b1dbd6d"
	I1115 10:35:43.887506  375871 cri.go:89] found id: "9e7dab2808e72b5ecf4c23f3a0c6c73dc08206c22ebcf5da92da7fd1464ea642"
	I1115 10:35:43.887508  375871 cri.go:89] found id: ""
	I1115 10:35:43.887557  375871 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:35:43.900225  375871 retry.go:31] will retry after 285.237833ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:35:43Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:35:44.185716  375871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:35:44.200101  375871 pause.go:52] kubelet running: false
	I1115 10:35:44.200156  375871 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:35:44.327333  375871 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:35:44.327411  375871 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:35:44.396675  375871 cri.go:89] found id: "141d480cf3b64b6bc24f8f5013f9a931686b80ed7bf8b12a85bcd2b351953257"
	I1115 10:35:44.396700  375871 cri.go:89] found id: "0e4febf6eeb916f0992d7e320785e3dbfccc6cfc0e69f63884d452c516e43258"
	I1115 10:35:44.396704  375871 cri.go:89] found id: "034573ebc531040d6466ecf78c8b86fefe56032a558c0c6e459de1608b9d81f5"
	I1115 10:35:44.396707  375871 cri.go:89] found id: "7594c7c2d610745a399557dd1247f6642b08937e57147358c301470340e5bbb3"
	I1115 10:35:44.396710  375871 cri.go:89] found id: "ba78e319d11c588a26d306264073a90262f5ec5da127e677e9bdbe733738df60"
	I1115 10:35:44.396713  375871 cri.go:89] found id: "b8b1ccd6451f4579f89a5a5b4368b0f6ed96c45d344cd9110c94b49fdceb39ed"
	I1115 10:35:44.396716  375871 cri.go:89] found id: "8ce75f5e9ad57aaaace9af39da481c138fb57073d1fee7bc88e75f67b8b6e7f7"
	I1115 10:35:44.396719  375871 cri.go:89] found id: "3fd62a9dd47699ac165f43ff643bf99a6efeeed696c5fdcd642be6b2a9374ff1"
	I1115 10:35:44.396721  375871 cri.go:89] found id: "dabb8b48098068214bdf9584f09c135d2dcdd3d138801a98bbacd77829336d90"
	I1115 10:35:44.396733  375871 cri.go:89] found id: "235ae938bb4114fcf19fc01771df50bcd55d9e0253dd1fc357c036d91b1dbd6d"
	I1115 10:35:44.396736  375871 cri.go:89] found id: "9e7dab2808e72b5ecf4c23f3a0c6c73dc08206c22ebcf5da92da7fd1464ea642"
	I1115 10:35:44.396739  375871 cri.go:89] found id: ""
	I1115 10:35:44.396779  375871 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:35:44.408992  375871 retry.go:31] will retry after 198.549218ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:35:44Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:35:44.608475  375871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:35:44.621755  375871 pause.go:52] kubelet running: false
	I1115 10:35:44.621817  375871 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:35:44.757279  375871 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:35:44.757384  375871 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:35:44.824580  375871 cri.go:89] found id: "141d480cf3b64b6bc24f8f5013f9a931686b80ed7bf8b12a85bcd2b351953257"
	I1115 10:35:44.824608  375871 cri.go:89] found id: "0e4febf6eeb916f0992d7e320785e3dbfccc6cfc0e69f63884d452c516e43258"
	I1115 10:35:44.824615  375871 cri.go:89] found id: "034573ebc531040d6466ecf78c8b86fefe56032a558c0c6e459de1608b9d81f5"
	I1115 10:35:44.824620  375871 cri.go:89] found id: "7594c7c2d610745a399557dd1247f6642b08937e57147358c301470340e5bbb3"
	I1115 10:35:44.824624  375871 cri.go:89] found id: "ba78e319d11c588a26d306264073a90262f5ec5da127e677e9bdbe733738df60"
	I1115 10:35:44.824630  375871 cri.go:89] found id: "b8b1ccd6451f4579f89a5a5b4368b0f6ed96c45d344cd9110c94b49fdceb39ed"
	I1115 10:35:44.824634  375871 cri.go:89] found id: "8ce75f5e9ad57aaaace9af39da481c138fb57073d1fee7bc88e75f67b8b6e7f7"
	I1115 10:35:44.824639  375871 cri.go:89] found id: "3fd62a9dd47699ac165f43ff643bf99a6efeeed696c5fdcd642be6b2a9374ff1"
	I1115 10:35:44.824643  375871 cri.go:89] found id: "dabb8b48098068214bdf9584f09c135d2dcdd3d138801a98bbacd77829336d90"
	I1115 10:35:44.824651  375871 cri.go:89] found id: "235ae938bb4114fcf19fc01771df50bcd55d9e0253dd1fc357c036d91b1dbd6d"
	I1115 10:35:44.824655  375871 cri.go:89] found id: "9e7dab2808e72b5ecf4c23f3a0c6c73dc08206c22ebcf5da92da7fd1464ea642"
	I1115 10:35:44.824659  375871 cri.go:89] found id: ""
	I1115 10:35:44.824706  375871 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:35:44.838698  375871 out.go:203] 
	W1115 10:35:44.839810  375871 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:35:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:35:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:35:44.839835  375871 out.go:285] * 
	* 
	W1115 10:35:44.845248  375871 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:35:44.846405  375871 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-087235 --alsologtostderr -v=1 failed: exit status 80
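The pause fails in the container-listing step: the crictl queries in the trace succeed, but sudo runc list -f json exits 1 because /run/runc is missing on the node. A minimal reproduction sketch (not part of the test run), assuming `minikube ssh` access to the profile and reusing the exact commands from the trace above:

	# The runc state listing that pause relies on (fails with "open /run/runc: no such file or directory"):
	out/minikube-linux-amd64 ssh -p old-k8s-version-087235 -- sudo runc list -f json
	# The crictl listing that found the kube-system containers:
	out/minikube-linux-amd64 ssh -p old-k8s-version-087235 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
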
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-087235
helpers_test.go:243: (dbg) docker inspect old-k8s-version-087235:

-- stdout --
	[
	    {
	        "Id": "3d4715b4872d2f4603f13f0191a01c4322a26432e1316c2ae62b1c571bea1814",
	        "Created": "2025-11-15T10:33:24.829295884Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 361917,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:34:42.560305954Z",
	            "FinishedAt": "2025-11-15T10:34:41.544966298Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/3d4715b4872d2f4603f13f0191a01c4322a26432e1316c2ae62b1c571bea1814/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3d4715b4872d2f4603f13f0191a01c4322a26432e1316c2ae62b1c571bea1814/hostname",
	        "HostsPath": "/var/lib/docker/containers/3d4715b4872d2f4603f13f0191a01c4322a26432e1316c2ae62b1c571bea1814/hosts",
	        "LogPath": "/var/lib/docker/containers/3d4715b4872d2f4603f13f0191a01c4322a26432e1316c2ae62b1c571bea1814/3d4715b4872d2f4603f13f0191a01c4322a26432e1316c2ae62b1c571bea1814-json.log",
	        "Name": "/old-k8s-version-087235",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-087235:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-087235",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3d4715b4872d2f4603f13f0191a01c4322a26432e1316c2ae62b1c571bea1814",
	                "LowerDir": "/var/lib/docker/overlay2/2a7fcfc651315e1f255f5887dcd49b9489c6d9bc0d5bb2729e48f789bfad9233-init/diff:/var/lib/docker/overlay2/507c85b6fb43d9c98216fac79d7a8d08dd20b2a63d0fbfc758336e5d38c04044/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2a7fcfc651315e1f255f5887dcd49b9489c6d9bc0d5bb2729e48f789bfad9233/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2a7fcfc651315e1f255f5887dcd49b9489c6d9bc0d5bb2729e48f789bfad9233/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2a7fcfc651315e1f255f5887dcd49b9489c6d9bc0d5bb2729e48f789bfad9233/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-087235",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-087235/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-087235",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-087235",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-087235",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9e46ad3d7d257a4acedaacae202f5c7e5ff342db3043ae0b762b3eb0dc67b0c9",
	            "SandboxKey": "/var/run/docker/netns/9e46ad3d7d25",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-087235": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "11bae6d0a5454f5603cad7765ca7366f9be46b927618f2c698dc454d778aa49c",
	                    "EndpointID": "74e26a0798dfa3498a4af2e39ad2b821ec1833feae7cd7a3eda4e27e4faa8c71",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "86:2c:ba:e2:e0:26",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-087235",
	                        "3d4715b4872d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-087235 -n old-k8s-version-087235
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-087235 -n old-k8s-version-087235: exit status 2 (331.077461ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-087235 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-087235 logs -n 25: (1.145027282s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-931243 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ ssh     │ -p bridge-931243 sudo docker system info                                                                                                                                 │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ ssh     │ -p bridge-931243 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ ssh     │ -p bridge-931243 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ ssh     │ -p bridge-931243 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo cri-dockerd --version                                                                                                                              │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ ssh     │ -p bridge-931243 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo containerd config dump                                                                                                                             │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo crio config                                                                                                                                        │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ delete  │ -p bridge-931243                                                                                                                                                         │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ delete  │ -p disable-driver-mounts-435527                                                                                                                                          │ disable-driver-mounts-435527 │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ start   │ -p default-k8s-diff-port-026691 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-283677 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ start   │ -p no-preload-283677 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-719574 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ stop    │ -p embed-certs-719574 --alsologtostderr -v=3                                                                                                                             │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ image   │ old-k8s-version-087235 image list --format=json                                                                                                                          │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ pause   │ -p old-k8s-version-087235 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:34:57
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:34:57.108674  368849 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:34:57.109040  368849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:34:57.109051  368849 out.go:374] Setting ErrFile to fd 2...
	I1115 10:34:57.109058  368849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:34:57.111080  368849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 10:34:57.111766  368849 out.go:368] Setting JSON to false
	I1115 10:34:57.113998  368849 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8234,"bootTime":1763194663,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:34:57.114136  368849 start.go:143] virtualization: kvm guest
	I1115 10:34:57.115948  368849 out.go:179] * [no-preload-283677] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:34:57.117523  368849 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 10:34:57.117555  368849 notify.go:221] Checking for updates...
	I1115 10:34:57.119869  368849 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:34:57.121118  368849 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:34:57.122183  368849 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube
	I1115 10:34:57.123828  368849 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:34:57.125045  368849 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:34:57.127033  368849 config.go:182] Loaded profile config "no-preload-283677": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:34:57.127935  368849 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:34:57.156939  368849 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:34:57.157094  368849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:34:57.240931  368849 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:79 SystemTime:2025-11-15 10:34:57.228600984 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:34:57.241107  368849 docker.go:319] overlay module found
	I1115 10:34:57.243006  368849 out.go:179] * Using the docker driver based on existing profile
	I1115 10:34:56.682396  361423 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1115 10:34:56.682754  361423 addons.go:515] duration metric: took 6.415772773s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1115 10:34:56.684325  361423 api_server.go:141] control plane version: v1.28.0
	I1115 10:34:56.684354  361423 api_server.go:131] duration metric: took 8.788317ms to wait for apiserver health ...
	I1115 10:34:56.684364  361423 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:34:56.690921  361423 system_pods.go:59] 8 kube-system pods found
	I1115 10:34:56.691034  361423 system_pods.go:61] "coredns-5dd5756b68-bdpfv" [f9b5c9c2-d642-4a22-890d-89a8f91f771b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:34:56.691127  361423 system_pods.go:61] "etcd-old-k8s-version-087235" [ad6ea576-0a1a-4a8a-96c2-3076229520e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:34:56.691149  361423 system_pods.go:61] "kindnet-7btvm" [40ac7700-b07d-4504-8532-414d2fab7395] Running
	I1115 10:34:56.691158  361423 system_pods.go:61] "kube-apiserver-old-k8s-version-087235" [d035a8eb-e4b4-4eae-8726-c29fa6059168] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:34:56.691166  361423 system_pods.go:61] "kube-controller-manager-old-k8s-version-087235" [9cc04644-cddd-4b9c-964c-826ee56dbc47] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:34:56.691172  361423 system_pods.go:61] "kube-proxy-gl22j" [a854c189-3bd6-4c7d-8160-ae11b35db003] Running
	I1115 10:34:56.691179  361423 system_pods.go:61] "kube-scheduler-old-k8s-version-087235" [1da33e9b-b3a7-4e2b-b951-81bfc76bb515] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:34:56.691184  361423 system_pods.go:61] "storage-provisioner" [f2e47bd9-5a00-47cd-9b2e-5b80244c04a1] Running
	I1115 10:34:56.691199  361423 system_pods.go:74] duration metric: took 6.828122ms to wait for pod list to return data ...
	I1115 10:34:56.691207  361423 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:34:56.695797  361423 default_sa.go:45] found service account: "default"
	I1115 10:34:56.695993  361423 default_sa.go:55] duration metric: took 4.775405ms for default service account to be created ...
	I1115 10:34:56.696009  361423 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:34:56.706900  361423 system_pods.go:86] 8 kube-system pods found
	I1115 10:34:56.706946  361423 system_pods.go:89] "coredns-5dd5756b68-bdpfv" [f9b5c9c2-d642-4a22-890d-89a8f91f771b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:34:56.707061  361423 system_pods.go:89] "etcd-old-k8s-version-087235" [ad6ea576-0a1a-4a8a-96c2-3076229520e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:34:56.707075  361423 system_pods.go:89] "kindnet-7btvm" [40ac7700-b07d-4504-8532-414d2fab7395] Running
	I1115 10:34:56.707086  361423 system_pods.go:89] "kube-apiserver-old-k8s-version-087235" [d035a8eb-e4b4-4eae-8726-c29fa6059168] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:34:56.707148  361423 system_pods.go:89] "kube-controller-manager-old-k8s-version-087235" [9cc04644-cddd-4b9c-964c-826ee56dbc47] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:34:56.707168  361423 system_pods.go:89] "kube-proxy-gl22j" [a854c189-3bd6-4c7d-8160-ae11b35db003] Running
	I1115 10:34:56.707188  361423 system_pods.go:89] "kube-scheduler-old-k8s-version-087235" [1da33e9b-b3a7-4e2b-b951-81bfc76bb515] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:34:56.707217  361423 system_pods.go:89] "storage-provisioner" [f2e47bd9-5a00-47cd-9b2e-5b80244c04a1] Running
	I1115 10:34:56.707230  361423 system_pods.go:126] duration metric: took 11.211997ms to wait for k8s-apps to be running ...
	I1115 10:34:56.707238  361423 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:34:56.707321  361423 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:34:56.739287  361423 system_svc.go:56] duration metric: took 32.035692ms WaitForService to wait for kubelet
	I1115 10:34:56.739406  361423 kubeadm.go:587] duration metric: took 6.472459641s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:34:56.739438  361423 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:34:56.744554  361423 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:34:56.744591  361423 node_conditions.go:123] node cpu capacity is 8
	I1115 10:34:56.744610  361423 node_conditions.go:105] duration metric: took 5.164463ms to run NodePressure ...
	I1115 10:34:56.744623  361423 start.go:242] waiting for startup goroutines ...
	I1115 10:34:56.744633  361423 start.go:247] waiting for cluster config update ...
	I1115 10:34:56.744648  361423 start.go:256] writing updated cluster config ...
	I1115 10:34:56.744949  361423 ssh_runner.go:195] Run: rm -f paused
	I1115 10:34:56.752666  361423 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:34:56.758416  361423 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-bdpfv" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:57.244155  368849 start.go:309] selected driver: docker
	I1115 10:34:57.244180  368849 start.go:930] validating driver "docker" against &{Name:no-preload-283677 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-283677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:34:57.244301  368849 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:34:57.245328  368849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:34:57.321410  368849 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:79 SystemTime:2025-11-15 10:34:57.3090885 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:34:57.321759  368849 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:34:57.321796  368849 cni.go:84] Creating CNI manager for ""
	I1115 10:34:57.321849  368849 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:34:57.321897  368849 start.go:353] cluster config:
	{Name:no-preload-283677 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-283677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:34:57.324353  368849 out.go:179] * Starting "no-preload-283677" primary control-plane node in "no-preload-283677" cluster
	I1115 10:34:57.325413  368849 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:34:57.326593  368849 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:34:57.327877  368849 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:34:57.327926  368849 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:34:57.328103  368849 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/config.json ...
	I1115 10:34:57.328512  368849 cache.go:107] acquiring lock: {Name:mk04e19ef4726336e87a2efa989ec89b11194587 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:34:57.328600  368849 cache.go:115] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1115 10:34:57.328611  368849 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 116.56µs
	I1115 10:34:57.328622  368849 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1115 10:34:57.328638  368849 cache.go:107] acquiring lock: {Name:mk160c40720b01bd77226b9ee86c8a56493b3987 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:34:57.328681  368849 cache.go:115] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1115 10:34:57.328688  368849 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 53.964µs
	I1115 10:34:57.328696  368849 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1115 10:34:57.328709  368849 cache.go:107] acquiring lock: {Name:mk568a3320f172c7702e0c64f82e9ab66f08dc56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:34:57.328745  368849 cache.go:115] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1115 10:34:57.328753  368849 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 45.66µs
	I1115 10:34:57.328760  368849 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1115 10:34:57.328772  368849 cache.go:107] acquiring lock: {Name:mk4538f0a5ff75ff8439835bfd59d64a365cd71b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:34:57.328806  368849 cache.go:115] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1115 10:34:57.328812  368849 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 42.3µs
	I1115 10:34:57.328820  368849 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1115 10:34:57.328842  368849 cache.go:107] acquiring lock: {Name:mkebd0527ca8cd5425c0189738c4c613b1d0ad77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:34:57.328878  368849 cache.go:115] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1115 10:34:57.328884  368849 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 55.883µs
	I1115 10:34:57.328893  368849 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1115 10:34:57.329374  368849 cache.go:107] acquiring lock: {Name:mk5c9d9d1f91519c0468e055d96da9be78d8987d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:34:57.329494  368849 cache.go:115] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1115 10:34:57.329505  368849 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 157µs
	I1115 10:34:57.329514  368849 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1115 10:34:57.329533  368849 cache.go:107] acquiring lock: {Name:mk6d25d7926738a8037e85ed094d1b802d5c1f77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:34:57.329577  368849 cache.go:115] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1115 10:34:57.329583  368849 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 53.182µs
	I1115 10:34:57.329591  368849 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1115 10:34:57.329625  368849 cache.go:107] acquiring lock: {Name:mkc6ed1fa15fd637355ac953d6d06e91f3f34a59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:34:57.329680  368849 cache.go:115] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1115 10:34:57.329687  368849 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 65.791µs
	I1115 10:34:57.329700  368849 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1115 10:34:57.329724  368849 cache.go:87] Successfully saved all images to host disk.
	I1115 10:34:57.355013  368849 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:34:57.355036  368849 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:34:57.355056  368849 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:34:57.355084  368849 start.go:360] acquireMachinesLock for no-preload-283677: {Name:mk8d9dc816de84055c03b404ddcac096c332be5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:34:57.355145  368849 start.go:364] duration metric: took 42.843µs to acquireMachinesLock for "no-preload-283677"
	I1115 10:34:57.355165  368849 start.go:96] Skipping create...Using existing machine configuration
	I1115 10:34:57.355174  368849 fix.go:54] fixHost starting: 
	I1115 10:34:57.355455  368849 cli_runner.go:164] Run: docker container inspect no-preload-283677 --format={{.State.Status}}
	I1115 10:34:57.375065  368849 fix.go:112] recreateIfNeeded on no-preload-283677: state=Stopped err=<nil>
	W1115 10:34:57.375094  368849 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 10:34:52.640072  367608 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:34:52.641977  367608 start.go:159] libmachine.API.Create for "default-k8s-diff-port-026691" (driver="docker")
	I1115 10:34:52.642026  367608 client.go:173] LocalClient.Create starting
	I1115 10:34:52.642126  367608 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem
	I1115 10:34:52.642171  367608 main.go:143] libmachine: Decoding PEM data...
	I1115 10:34:52.642193  367608 main.go:143] libmachine: Parsing certificate...
	I1115 10:34:52.642275  367608 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem
	I1115 10:34:52.642302  367608 main.go:143] libmachine: Decoding PEM data...
	I1115 10:34:52.642316  367608 main.go:143] libmachine: Parsing certificate...
	I1115 10:34:52.642807  367608 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-026691 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:34:52.663735  367608 cli_runner.go:211] docker network inspect default-k8s-diff-port-026691 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:34:52.663801  367608 network_create.go:284] running [docker network inspect default-k8s-diff-port-026691] to gather additional debugging logs...
	I1115 10:34:52.663820  367608 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-026691
	W1115 10:34:52.681651  367608 cli_runner.go:211] docker network inspect default-k8s-diff-port-026691 returned with exit code 1
	I1115 10:34:52.681682  367608 network_create.go:287] error running [docker network inspect default-k8s-diff-port-026691]: docker network inspect default-k8s-diff-port-026691: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-026691 not found
	I1115 10:34:52.681694  367608 network_create.go:289] output of [docker network inspect default-k8s-diff-port-026691]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-026691 not found
	
	** /stderr **
	I1115 10:34:52.681815  367608 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:34:52.703576  367608 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-77644897380e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:f9:e2:83:c3:91} reservation:<nil>}
	I1115 10:34:52.704399  367608 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e808d03b2052 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:03:24:ef:92:a1} reservation:<nil>}
	I1115 10:34:52.705358  367608 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3fb45eeaa66e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1a:b3:b7:0f:2e:99} reservation:<nil>}
	I1115 10:34:52.706067  367608 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-31f43b806931 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0a:cc:8c:d8:0d:c5} reservation:<nil>}
	I1115 10:34:52.707182  367608 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ec6c60}
	I1115 10:34:52.707213  367608 network_create.go:124] attempt to create docker network default-k8s-diff-port-026691 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1115 10:34:52.707274  367608 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-026691 default-k8s-diff-port-026691
	I1115 10:34:52.763872  367608 network_create.go:108] docker network default-k8s-diff-port-026691 192.168.85.0/24 created
	I1115 10:34:52.763908  367608 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-026691" container
	I1115 10:34:52.764001  367608 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:34:52.794341  367608 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-026691 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-026691 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:34:52.814745  367608 oci.go:103] Successfully created a docker volume default-k8s-diff-port-026691
	I1115 10:34:52.814828  367608 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-026691-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-026691 --entrypoint /usr/bin/test -v default-k8s-diff-port-026691:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:34:53.252498  367608 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-026691
	I1115 10:34:53.252579  367608 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:34:53.252594  367608 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:34:53.252663  367608 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-026691:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1115 10:34:56.654774  367608 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-026691:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.402061031s)
	I1115 10:34:56.654813  367608 kic.go:203] duration metric: took 3.402214691s to extract preloaded images to volume ...
	W1115 10:34:56.654990  367608 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 10:34:56.655155  367608 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 10:34:56.764857  367608 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-026691 --name default-k8s-diff-port-026691 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-026691 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-026691 --network default-k8s-diff-port-026691 --ip 192.168.85.2 --volume default-k8s-diff-port-026691:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 10:34:57.094021  367608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Running}}
	I1115 10:34:57.121300  367608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:34:57.147203  367608 cli_runner.go:164] Run: docker exec default-k8s-diff-port-026691 stat /var/lib/dpkg/alternatives/iptables
	I1115 10:34:57.208529  367608 oci.go:144] the created container "default-k8s-diff-port-026691" has a running status.
	I1115 10:34:57.208578  367608 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa...
	I1115 10:34:54.186226  358343 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.814123ms
	I1115 10:34:54.189071  358343 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 10:34:54.189208  358343 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1115 10:34:54.189338  358343 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 10:34:54.189440  358343 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 10:34:57.855035  367608 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 10:34:57.883874  367608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:34:57.907435  367608 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 10:34:57.907455  367608 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-026691 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 10:34:57.965903  367608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:34:57.988026  367608 machine.go:94] provisionDockerMachine start ...
	I1115 10:34:57.988137  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:34:58.012542  367608 main.go:143] libmachine: Using SSH client type: native
	I1115 10:34:58.012924  367608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1115 10:34:58.012944  367608 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:34:58.159148  367608 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-026691
	
	I1115 10:34:58.159194  367608 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-026691"
	I1115 10:34:58.159277  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:34:58.189206  367608 main.go:143] libmachine: Using SSH client type: native
	I1115 10:34:58.189501  367608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1115 10:34:58.189523  367608 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-026691 && echo "default-k8s-diff-port-026691" | sudo tee /etc/hostname
	I1115 10:34:58.348350  367608 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-026691
	
	I1115 10:34:58.348454  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:34:58.368199  367608 main.go:143] libmachine: Using SSH client type: native
	I1115 10:34:58.368410  367608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1115 10:34:58.368430  367608 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-026691' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-026691/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-026691' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:34:58.503716  367608 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:34:58.503754  367608 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-55448/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-55448/.minikube}
	I1115 10:34:58.503778  367608 ubuntu.go:190] setting up certificates
	I1115 10:34:58.503791  367608 provision.go:84] configureAuth start
	I1115 10:34:58.503853  367608 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-026691
	I1115 10:34:58.522763  367608 provision.go:143] copyHostCerts
	I1115 10:34:58.522820  367608 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem, removing ...
	I1115 10:34:58.522830  367608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem
	I1115 10:34:58.522904  367608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem (1082 bytes)
	I1115 10:34:58.523027  367608 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem, removing ...
	I1115 10:34:58.523038  367608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem
	I1115 10:34:58.523078  367608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem (1123 bytes)
	I1115 10:34:58.523158  367608 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem, removing ...
	I1115 10:34:58.523169  367608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem
	I1115 10:34:58.523203  367608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem (1679 bytes)
	I1115 10:34:58.523272  367608 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-026691 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-026691 localhost minikube]
	I1115 10:34:58.590090  367608 provision.go:177] copyRemoteCerts
	I1115 10:34:58.590145  367608 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:34:58.590187  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:34:58.608644  367608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:34:58.703764  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:34:58.724559  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1115 10:34:58.742665  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:34:58.759994  367608 provision.go:87] duration metric: took 256.187247ms to configureAuth
	I1115 10:34:58.760028  367608 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:34:58.760213  367608 config.go:182] Loaded profile config "default-k8s-diff-port-026691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:34:58.760342  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:34:58.778722  367608 main.go:143] libmachine: Using SSH client type: native
	I1115 10:34:58.779014  367608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1115 10:34:58.779041  367608 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:34:59.033178  367608 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:34:59.033211  367608 machine.go:97] duration metric: took 1.045153146s to provisionDockerMachine
	I1115 10:34:59.033226  367608 client.go:176] duration metric: took 6.391191213s to LocalClient.Create
	I1115 10:34:59.033253  367608 start.go:167] duration metric: took 6.391304318s to libmachine.API.Create "default-k8s-diff-port-026691"
	I1115 10:34:59.033266  367608 start.go:293] postStartSetup for "default-k8s-diff-port-026691" (driver="docker")
	I1115 10:34:59.033285  367608 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:34:59.033376  367608 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:34:59.033438  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:34:59.053944  367608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:34:59.157205  367608 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:34:59.161685  367608 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:34:59.161717  367608 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:34:59.161733  367608 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/addons for local assets ...
	I1115 10:34:59.161795  367608 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/files for local assets ...
	I1115 10:34:59.161913  367608 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem -> 589622.pem in /etc/ssl/certs
	I1115 10:34:59.162069  367608 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:34:59.171183  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:34:59.197319  367608 start.go:296] duration metric: took 164.030813ms for postStartSetup
	I1115 10:34:59.197664  367608 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-026691
	I1115 10:34:59.222158  367608 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/config.json ...
	I1115 10:34:59.222456  367608 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:34:59.222508  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:34:59.245172  367608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:34:59.338333  367608 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:34:59.342944  367608 start.go:128] duration metric: took 6.710898676s to createHost
	I1115 10:34:59.342984  367608 start.go:83] releasing machines lock for "default-k8s-diff-port-026691", held for 6.711262903s
	I1115 10:34:59.343053  367608 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-026691
	I1115 10:34:59.360891  367608 ssh_runner.go:195] Run: cat /version.json
	I1115 10:34:59.360960  367608 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:34:59.360981  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:34:59.361027  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:34:59.380703  367608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:34:59.381093  367608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:34:59.543341  367608 ssh_runner.go:195] Run: systemctl --version
	I1115 10:34:59.550150  367608 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:34:59.588663  367608 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:34:59.594351  367608 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:34:59.594425  367608 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:34:59.627965  367608 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1115 10:34:59.627992  367608 start.go:496] detecting cgroup driver to use...
	I1115 10:34:59.628030  367608 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:34:59.628089  367608 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:34:59.644582  367608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:34:59.656945  367608 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:34:59.657016  367608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:34:59.673964  367608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:34:59.698909  367608 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:34:59.793897  367608 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:34:59.897920  367608 docker.go:234] disabling docker service ...
	I1115 10:34:59.898017  367608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:34:59.921681  367608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:34:59.935475  367608 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:35:00.040217  367608 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:35:00.145087  367608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:35:00.157908  367608 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:35:00.172301  367608 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:35:00.172359  367608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:00.185532  367608 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:35:00.185603  367608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:00.195014  367608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:00.204978  367608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:00.216321  367608 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:35:00.224805  367608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:00.233598  367608 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:00.248215  367608 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:00.257523  367608 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:35:00.265789  367608 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:35:00.273509  367608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:00.370097  367608 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:35:00.480383  367608 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:35:00.480459  367608 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:35:00.484506  367608 start.go:564] Will wait 60s for crictl version
	I1115 10:35:00.484571  367608 ssh_runner.go:195] Run: which crictl
	I1115 10:35:00.488156  367608 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:35:00.512458  367608 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:35:00.512546  367608 ssh_runner.go:195] Run: crio --version
	I1115 10:35:00.540995  367608 ssh_runner.go:195] Run: crio --version
	I1115 10:35:00.580705  367608 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
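	The block above switches the node's runtime: cri-docker and docker are stopped, disabled and masked, /etc/crictl.yaml is pointed at the cri-o socket, and the 02-crio.conf drop-in is rewritten via sed before crio is restarted. A minimal sketch of checking the result on the node, assuming the stock kicbase file layout; the expected values in the comments are inferred from the sed/printf commands above, not read back from this run:

	    # Expected crictl endpoint, as written by the printf | tee above
	    cat /etc/crictl.yaml
	    # runtime-endpoint: unix:///var/run/crio/crio.sock

	    # Keys rewritten by the sed commands above (surrounding file contents assumed)
	    grep -E 'pause_image|cgroup_manager|conmon_cgroup|net.ipv4.ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf
	    # pause_image = "registry.k8s.io/pause:3.10.1"
	    # cgroup_manager = "cgroupfs"
	    # conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",

	    sudo systemctl restart crio && sudo crictl version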
	I1115 10:34:57.377717  368849 out.go:252] * Restarting existing docker container for "no-preload-283677" ...
	I1115 10:34:57.377792  368849 cli_runner.go:164] Run: docker start no-preload-283677
	I1115 10:34:57.726123  368849 cli_runner.go:164] Run: docker container inspect no-preload-283677 --format={{.State.Status}}
	I1115 10:34:57.753398  368849 kic.go:430] container "no-preload-283677" state is running.
	I1115 10:34:57.753840  368849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-283677
	I1115 10:34:57.778603  368849 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/config.json ...
	I1115 10:34:57.778940  368849 machine.go:94] provisionDockerMachine start ...
	I1115 10:34:57.779390  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:34:57.804369  368849 main.go:143] libmachine: Using SSH client type: native
	I1115 10:34:57.805107  368849 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1115 10:34:57.805139  368849 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:34:57.806009  368849 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40268->127.0.0.1:33114: read: connection reset by peer
	I1115 10:35:00.948741  368849 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-283677
	
	I1115 10:35:00.948777  368849 ubuntu.go:182] provisioning hostname "no-preload-283677"
	I1115 10:35:00.948835  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:00.969578  368849 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:00.969832  368849 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1115 10:35:00.969850  368849 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-283677 && echo "no-preload-283677" | sudo tee /etc/hostname
	I1115 10:35:01.127681  368849 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-283677
	
	I1115 10:35:01.127767  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:01.146233  368849 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:01.146580  368849 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1115 10:35:01.146607  368849 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-283677' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-283677/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-283677' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:35:01.284681  368849 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:35:01.284713  368849 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-55448/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-55448/.minikube}
	I1115 10:35:01.284748  368849 ubuntu.go:190] setting up certificates
	I1115 10:35:01.284762  368849 provision.go:84] configureAuth start
	I1115 10:35:01.284822  368849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-283677
	I1115 10:35:01.303443  368849 provision.go:143] copyHostCerts
	I1115 10:35:01.303518  368849 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem, removing ...
	I1115 10:35:01.303535  368849 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem
	I1115 10:35:01.303611  368849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem (1123 bytes)
	I1115 10:35:01.303735  368849 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem, removing ...
	I1115 10:35:01.303747  368849 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem
	I1115 10:35:01.303788  368849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem (1679 bytes)
	I1115 10:35:01.303897  368849 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem, removing ...
	I1115 10:35:01.303909  368849 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem
	I1115 10:35:01.303945  368849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem (1082 bytes)
	I1115 10:35:01.304057  368849 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem org=jenkins.no-preload-283677 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-283677]
	I1115 10:35:01.479935  368849 provision.go:177] copyRemoteCerts
	I1115 10:35:01.480049  368849 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:35:01.480102  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:01.499143  368849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
	I1115 10:35:01.593407  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:35:01.611444  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 10:35:01.629246  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 10:35:01.647087  368849 provision.go:87] duration metric: took 362.308284ms to configureAuth
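	configureAuth regenerates the server certificate with the SANs listed above ([127.0.0.1 192.168.76.2 localhost minikube no-preload-283677]) and scps it to /etc/docker/server.pem. A sketch of double-checking those SANs on the node; the openssl invocation is standard tooling rather than something this run executed, and the output comment is an assumption about formatting:

	    # Inspect the SAN list on the provisioned server certificate
	    openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	    #   X509v3 Subject Alternative Name:
	    #       DNS:localhost, DNS:minikube, DNS:no-preload-283677, IP Address:127.0.0.1, IP Address:192.168.76.2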
	I1115 10:35:01.647136  368849 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:35:01.647339  368849 config.go:182] Loaded profile config "no-preload-283677": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:01.647467  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:01.667372  368849 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:01.667673  368849 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1115 10:35:01.667695  368849 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:35:01.979196  368849 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:35:01.979228  368849 machine.go:97] duration metric: took 4.200198854s to provisionDockerMachine
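	provisionDockerMachine finishes after writing CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarting crio over SSH. A quick sketch of verifying that drop-in on the node, assuming the write echoed above landed as shown:

	    cat /etc/sysconfig/crio.minikube
	    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    systemctl is-active crio
	    # active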
	I1115 10:35:01.979281  368849 start.go:293] postStartSetup for "no-preload-283677" (driver="docker")
	I1115 10:35:01.979310  368849 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:35:01.979376  368849 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:35:01.979445  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:02.006457  368849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
	W1115 10:34:58.763972  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	W1115 10:35:00.765899  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	I1115 10:35:00.581817  367608 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-026691 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:35:00.607057  367608 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1115 10:35:00.613228  367608 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:35:00.626466  367608 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-026691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:35:00.626625  367608 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:35:00.626700  367608 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:35:00.658108  367608 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:35:00.658131  367608 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:35:00.658175  367608 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:35:00.696481  367608 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:35:00.696507  367608 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:35:00.696517  367608 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1115 10:35:00.696629  367608 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-026691 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:35:00.696715  367608 ssh_runner.go:195] Run: crio config
	I1115 10:35:00.744746  367608 cni.go:84] Creating CNI manager for ""
	I1115 10:35:00.744772  367608 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:35:00.744791  367608 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:35:00.744814  367608 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-026691 NodeName:default-k8s-diff-port-026691 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:35:00.744945  367608 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-026691"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:35:00.745029  367608 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:35:00.753434  367608 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:35:00.753504  367608 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:35:00.762137  367608 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1115 10:35:00.775671  367608 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:35:00.797030  367608 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
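	The 2225-byte kubeadm config rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new and later copied to /var/tmp/minikube/kubeadm.yaml for init. A sketch of sanity-checking such a file by hand; the dry-run flag is standard kubeadm and was not part of this run:

	    # Exercise the staged kubeadm config without touching the cluster
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run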
	I1115 10:35:00.815366  367608 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:35:00.819023  367608 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:35:00.829919  367608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:00.924599  367608 ssh_runner.go:195] Run: sudo systemctl start kubelet
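	The scp lines above install the kubelet unit (352 bytes) and the 10-kubeadm.conf drop-in (378 bytes) carrying the ExecStart override shown earlier, then the unit is reloaded and started. A sketch of confirming what the node actually runs, using standard systemd commands not taken from this log:

	    # Show the kubelet unit together with the minikube drop-in
	    systemctl cat kubelet
	    systemctl is-active kubelet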
	I1115 10:35:00.946789  367608 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691 for IP: 192.168.85.2
	I1115 10:35:00.946817  367608 certs.go:195] generating shared ca certs ...
	I1115 10:35:00.946839  367608 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:00.947089  367608 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 10:35:00.947146  367608 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 10:35:00.947160  367608 certs.go:257] generating profile certs ...
	I1115 10:35:00.947236  367608 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/client.key
	I1115 10:35:00.947253  367608 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/client.crt with IP's: []
	I1115 10:35:01.041305  367608 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/client.crt ...
	I1115 10:35:01.041332  367608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/client.crt: {Name:mk850ac752ca8e1bd96e0112fe9cd33d06ae9831 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:01.041557  367608 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/client.key ...
	I1115 10:35:01.041576  367608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/client.key: {Name:mkc9f22f4d08691fb039bf58ca3696be01b8d2da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:01.041712  367608 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.key.f8824eec
	I1115 10:35:01.041737  367608 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.crt.f8824eec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1115 10:35:01.322559  367608 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.crt.f8824eec ...
	I1115 10:35:01.322598  367608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.crt.f8824eec: {Name:mk3e587e72b06a1c3e15f6608c5003fe07edb847 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:01.322844  367608 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.key.f8824eec ...
	I1115 10:35:01.322868  367608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.key.f8824eec: {Name:mka898e08cb25730cf00e76bc5148d21b3cfc491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:01.323013  367608 certs.go:382] copying /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.crt.f8824eec -> /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.crt
	I1115 10:35:01.323157  367608 certs.go:386] copying /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.key.f8824eec -> /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.key
	I1115 10:35:01.323229  367608 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.key
	I1115 10:35:01.323245  367608 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.crt with IP's: []
	I1115 10:35:01.668272  367608 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.crt ...
	I1115 10:35:01.668297  367608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.crt: {Name:mkd2364b507fdcd0e7075f46fb15018bc571dc50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:01.668447  367608 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.key ...
	I1115 10:35:01.668460  367608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.key: {Name:mk25118b0c3511bad3ea017a869823a0d0c461a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:01.668624  367608 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:35:01.668657  367608 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:35:01.668665  367608 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:35:01.668690  367608 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:35:01.668714  367608 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:35:01.668735  367608 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:35:01.668771  367608 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:35:01.669438  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:35:01.688356  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:35:01.706706  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:35:01.726247  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:35:01.748085  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1115 10:35:01.768285  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:35:01.788057  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:35:01.809920  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:35:01.831794  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:35:01.856775  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:35:01.878135  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:35:01.900771  367608 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:35:01.917435  367608 ssh_runner.go:195] Run: openssl version
	I1115 10:35:01.925573  367608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:35:01.937193  367608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:01.942570  367608 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:01.942644  367608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:01.994260  367608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:35:02.006261  367608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:35:02.017141  367608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:35:02.021709  367608 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:35:02.021780  367608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:35:02.067748  367608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:35:02.078280  367608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:35:02.088732  367608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:35:02.093398  367608 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:35:02.093499  367608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:35:02.141627  367608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
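	Each CA copied under /usr/share/ca-certificates is exposed to OpenSSL by linking it into /etc/ssl/certs under its subject hash (b5213941.0, 51391683.0 and 3ec20f2e.0 above). A sketch of the same hash-and-link step for the minikubeCA certificate, using the paths from this run:

	    # Derive the OpenSSL subject hash and create the lookup symlink it expects
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	    ls -l "/etc/ssl/certs/${hash}.0"   # expected: b5213941.0 -> minikubeCA.pem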
	I1115 10:35:02.152207  367608 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:35:02.157541  367608 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 10:35:02.157606  367608 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-026691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:35:02.157707  367608 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:35:02.157765  367608 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:35:02.193432  367608 cri.go:89] found id: ""
	I1115 10:35:02.193509  367608 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:35:02.203886  367608 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 10:35:02.213132  367608 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 10:35:02.213199  367608 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 10:35:02.223642  367608 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 10:35:02.223664  367608 kubeadm.go:158] found existing configuration files:
	
	I1115 10:35:02.223715  367608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1115 10:35:02.233048  367608 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 10:35:02.233117  367608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 10:35:02.242878  367608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1115 10:35:02.252925  367608 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 10:35:02.253017  367608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 10:35:02.262094  367608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1115 10:35:02.272394  367608 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 10:35:02.272467  367608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 10:35:02.282583  367608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1115 10:35:02.293280  367608 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 10:35:02.293346  367608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
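	The four grep/rm pairs above drop any kubeconfig that does not already point at https://control-plane.minikube.internal:8444; on this fresh container none of the files exist, so every grep exits with status 2 and the rm is a no-op. The same check, collapsed into one loop over the paths from this run:

	    # Remove kubeconfigs that do not reference the expected control-plane endpoint
	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q 'https://control-plane.minikube.internal:8444' "/etc/kubernetes/${f}.conf" \
	        || sudo rm -f "/etc/kubernetes/${f}.conf"
	    done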
	I1115 10:35:02.303662  367608 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 10:35:02.354565  367608 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 10:35:02.354729  367608 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 10:35:02.385123  367608 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 10:35:02.385201  367608 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1115 10:35:02.385231  367608 kubeadm.go:319] OS: Linux
	I1115 10:35:02.385269  367608 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 10:35:02.385308  367608 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 10:35:02.385351  367608 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 10:35:02.385393  367608 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 10:35:02.385433  367608 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 10:35:02.385481  367608 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 10:35:02.385522  367608 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 10:35:02.385561  367608 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 10:35:02.385602  367608 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 10:35:02.460034  367608 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 10:35:02.460205  367608 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 10:35:02.460365  367608 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 10:35:02.468539  367608 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 10:35:00.333123  358343 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.143915574s
	I1115 10:35:00.909515  358343 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.720435889s
	I1115 10:35:02.691418  358343 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.502281905s
	I1115 10:35:02.704108  358343 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 10:35:02.720604  358343 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 10:35:02.737329  358343 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 10:35:02.737599  358343 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-719574 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 10:35:02.747326  358343 kubeadm.go:319] [bootstrap-token] Using token: ob95li.bwu5dbqfa14hsvt0
	I1115 10:35:02.110046  368849 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:35:02.114790  368849 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:35:02.114831  368849 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:35:02.114844  368849 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/addons for local assets ...
	I1115 10:35:02.114898  368849 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/files for local assets ...
	I1115 10:35:02.115028  368849 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem -> 589622.pem in /etc/ssl/certs
	I1115 10:35:02.115160  368849 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:35:02.124610  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:35:02.146846  368849 start.go:296] duration metric: took 167.527166ms for postStartSetup
	I1115 10:35:02.146933  368849 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:35:02.147016  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:02.169248  368849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
	I1115 10:35:02.269154  368849 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:35:02.275482  368849 fix.go:56] duration metric: took 4.92029981s for fixHost
	I1115 10:35:02.275512  368849 start.go:83] releasing machines lock for "no-preload-283677", held for 4.920355261s
	I1115 10:35:02.275586  368849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-283677
	I1115 10:35:02.298638  368849 ssh_runner.go:195] Run: cat /version.json
	I1115 10:35:02.298698  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:02.298727  368849 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:35:02.298824  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:02.322717  368849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
	I1115 10:35:02.323463  368849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
	I1115 10:35:02.488756  368849 ssh_runner.go:195] Run: systemctl --version
	I1115 10:35:02.497019  368849 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:35:02.536446  368849 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:35:02.541399  368849 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:35:02.541491  368849 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:35:02.549838  368849 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:35:02.549867  368849 start.go:496] detecting cgroup driver to use...
	I1115 10:35:02.549905  368849 detect.go:187] detected "cgroupfs" cgroup driver on host os
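	detect.go reports the host's cgroup driver as cgroupfs, which is what the earlier 02-crio.conf edits and the kubelet cgroupDriver setting are matched against. A sketch of checking the host cgroup mode directly; this command is generic and was not run here:

	    # cgroup2fs => unified cgroup v2 hierarchy, tmpfs => legacy cgroup v1
	    stat -fc %T /sys/fs/cgroup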
	I1115 10:35:02.549977  368849 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:35:02.565514  368849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:35:02.577769  368849 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:35:02.577831  368849 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:35:02.592941  368849 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:35:02.605708  368849 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:35:02.688663  368849 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:35:02.788806  368849 docker.go:234] disabling docker service ...
	I1115 10:35:02.788873  368849 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:35:02.807424  368849 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:35:02.823661  368849 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:35:02.915268  368849 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:35:03.000433  368849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:35:03.014052  368849 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:35:03.029226  368849 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:35:03.029290  368849 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:03.038642  368849 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:35:03.038706  368849 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:03.049065  368849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:03.058622  368849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:03.068077  368849 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:35:03.076469  368849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:03.085644  368849 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:03.094454  368849 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:03.104534  368849 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:35:03.112679  368849 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:35:03.121020  368849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:03.222503  368849 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:35:03.357676  368849 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:35:03.357737  368849 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:35:03.361906  368849 start.go:564] Will wait 60s for crictl version
	I1115 10:35:03.361977  368849 ssh_runner.go:195] Run: which crictl
	I1115 10:35:03.365723  368849 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:35:03.404943  368849 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:35:03.405117  368849 ssh_runner.go:195] Run: crio --version
	I1115 10:35:03.438126  368849 ssh_runner.go:195] Run: crio --version
	I1115 10:35:03.469166  368849 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:35:02.748656  358343 out.go:252]   - Configuring RBAC rules ...
	I1115 10:35:02.748791  358343 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 10:35:02.753461  358343 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 10:35:02.760294  358343 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 10:35:02.763295  358343 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 10:35:02.766433  358343 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 10:35:02.769201  358343 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 10:35:03.098048  358343 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 10:35:03.518284  358343 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 10:35:04.097647  358343 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 10:35:04.098832  358343 kubeadm.go:319] 
	I1115 10:35:04.098915  358343 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 10:35:04.098925  358343 kubeadm.go:319] 
	I1115 10:35:04.099031  358343 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 10:35:04.099041  358343 kubeadm.go:319] 
	I1115 10:35:04.099069  358343 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 10:35:04.099152  358343 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 10:35:04.099270  358343 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 10:35:04.099293  358343 kubeadm.go:319] 
	I1115 10:35:04.099366  358343 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 10:35:04.099378  358343 kubeadm.go:319] 
	I1115 10:35:04.099446  358343 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 10:35:04.099456  358343 kubeadm.go:319] 
	I1115 10:35:04.099530  358343 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 10:35:04.099646  358343 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 10:35:04.099741  358343 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 10:35:04.099750  358343 kubeadm.go:319] 
	I1115 10:35:04.099881  358343 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 10:35:04.100020  358343 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 10:35:04.100033  358343 kubeadm.go:319] 
	I1115 10:35:04.100148  358343 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ob95li.bwu5dbqfa14hsvt0 \
	I1115 10:35:04.100288  358343 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c958101d3a67868123c5e439a6c1ea6e30a99a6538b76cbe461e2c919cf1aec \
	I1115 10:35:04.100317  358343 kubeadm.go:319] 	--control-plane 
	I1115 10:35:04.100323  358343 kubeadm.go:319] 
	I1115 10:35:04.100427  358343 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 10:35:04.100436  358343 kubeadm.go:319] 
	I1115 10:35:04.100540  358343 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ob95li.bwu5dbqfa14hsvt0 \
	I1115 10:35:04.100692  358343 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c958101d3a67868123c5e439a6c1ea6e30a99a6538b76cbe461e2c919cf1aec 
	I1115 10:35:04.103489  358343 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1115 10:35:04.103671  358343 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1115 10:35:04.103762  358343 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 10:35:04.103783  358343 cni.go:84] Creating CNI manager for ""
	I1115 10:35:04.103792  358343 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:35:04.105369  358343 out.go:179] * Configuring CNI (Container Networking Interface) ...
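	With the docker driver and the crio runtime the CNI manager picks kindnet, and the next step applies it to the new embed-certs-719574 control plane. A sketch of confirming the CNI and node state once that apply completes; the app=kindnet label selector is an assumption, not shown in this log:

	    # Verify the kindnet pods and node readiness after CNI configuration
	    kubectl -n kube-system get pods -l app=kindnet -o wide
	    kubectl get nodes -o wide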
	I1115 10:35:03.470218  368849 cli_runner.go:164] Run: docker network inspect no-preload-283677 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:35:03.490738  368849 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1115 10:35:03.496698  368849 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:35:03.510823  368849 kubeadm.go:884] updating cluster {Name:no-preload-283677 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-283677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:35:03.511006  368849 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:35:03.511057  368849 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:35:03.547890  368849 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:35:03.547916  368849 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:35:03.547926  368849 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1115 10:35:03.548063  368849 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-283677 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-283677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
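The empty ExecStart= line followed by a second ExecStart= in the unit text above is the standard systemd drop-in idiom for replacing, rather than appending to, a unit's command line; the drop-in itself is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. A hedged sketch for confirming on the node which ExecStart systemd will actually run:

    # reload unit files, then print the merged unit (base file plus drop-ins)
    sudo systemctl daemon-reload
    systemctl cat kubelet | grep -A1 '^ExecStart='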
	I1115 10:35:03.548166  368849 ssh_runner.go:195] Run: crio config
	I1115 10:35:03.599181  368849 cni.go:84] Creating CNI manager for ""
	I1115 10:35:03.599206  368849 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:35:03.599223  368849 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:35:03.599244  368849 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-283677 NodeName:no-preload-283677 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:35:03.599372  368849 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-283677"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
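The document above is three configs (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) joined by ---; it is written to /var/tmp/minikube/kubeadm.yaml.new below. As a sketch only, it could be sanity-checked offline, assuming the kubeadm binary is staged next to kubelet/kubectl under the same versioned directory (that path and the validate subcommand are assumptions here, not something this log shows):

    # hypothetical offline check of the generated config
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new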
	
	I1115 10:35:03.599441  368849 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:35:03.610310  368849 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:35:03.610397  368849 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:35:03.619706  368849 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1115 10:35:03.632722  368849 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:35:03.645918  368849 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1115 10:35:03.658741  368849 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:35:03.662232  368849 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:35:03.671761  368849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:03.756659  368849 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:35:03.786378  368849 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677 for IP: 192.168.76.2
	I1115 10:35:03.786402  368849 certs.go:195] generating shared ca certs ...
	I1115 10:35:03.786422  368849 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:03.786604  368849 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 10:35:03.786672  368849 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 10:35:03.786685  368849 certs.go:257] generating profile certs ...
	I1115 10:35:03.786797  368849 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/client.key
	I1115 10:35:03.786865  368849 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/apiserver.key.d18d8ebf
	I1115 10:35:03.786925  368849 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/proxy-client.key
	I1115 10:35:03.787095  368849 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:35:03.787136  368849 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:35:03.787149  368849 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:35:03.787190  368849 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:35:03.787228  368849 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:35:03.787263  368849 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:35:03.787329  368849 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:35:03.788176  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:35:03.809608  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:35:03.829918  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:35:03.850004  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:35:03.882797  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 10:35:03.974262  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:35:03.996550  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:35:04.017706  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 10:35:04.035832  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:35:04.053680  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:35:04.072674  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:35:04.091110  368849 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:35:04.106710  368849 ssh_runner.go:195] Run: openssl version
	I1115 10:35:04.113684  368849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:35:04.123025  368849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:04.127895  368849 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:04.127949  368849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:04.173742  368849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:35:04.183070  368849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:35:04.192820  368849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:35:04.197810  368849 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:35:04.197877  368849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:35:04.238270  368849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:35:04.249044  368849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:35:04.260640  368849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:35:04.265573  368849 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:35:04.265640  368849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:35:04.304857  368849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
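The ls/openssl/ln sequence above reproduces how the system trust store indexes CA certificates: each PEM under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its OpenSSL subject hash (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). The same pattern as a small sketch:

    # compute the subject hash and create the <hash>.0 symlink the TLS stack looks up
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"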
	I1115 10:35:04.316678  368849 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:35:04.321538  368849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:35:04.391497  368849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:35:04.568753  368849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:35:04.685855  368849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:35:04.802487  368849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:35:04.896806  368849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
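Each "openssl x509 ... -checkend 86400" call above exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now, which is how the restart path decides the existing control-plane certificates can be reused. For example:

    # reuse the cert only if it stays valid for at least another day
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "cert valid for at least another 24h; reuse it"
    else
      echo "cert missing or expiring soon; regenerate"
    fi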
	I1115 10:35:05.014514  368849 kubeadm.go:401] StartCluster: {Name:no-preload-283677 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-283677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:35:05.014628  368849 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:35:05.014704  368849 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:35:05.102808  368849 cri.go:89] found id: "324a3ff1cd89d80c17c217faf6db6f2b6c9f52f5abe13f2e83485e1e03b0c7aa"
	I1115 10:35:05.102868  368849 cri.go:89] found id: "8c532dc6e69808afe1a0ffb587f828c0b3f6fa37c71b2fbc5ce4abdafdedf008"
	I1115 10:35:05.102874  368849 cri.go:89] found id: "ac246fc71f81d0fb5e3cd430730c903f2ca388376feb3a8dc321eb565aa6c5ee"
	I1115 10:35:05.102879  368849 cri.go:89] found id: "c26ba954b1e2fc1ea4ef12fc0801c0a31171b23e67cf48fdeb9207cbdb3ba3b0"
	I1115 10:35:05.102883  368849 cri.go:89] found id: ""
	I1115 10:35:05.102973  368849 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:35:05.170451  368849 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:35:05Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:35:05.170545  368849 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:35:05.180340  368849 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:35:05.180361  368849 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:35:05.180411  368849 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:35:05.189950  368849 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:35:05.190767  368849 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-283677" does not appear in /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:35:05.192333  368849 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-55448/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-283677" cluster setting kubeconfig missing "no-preload-283677" context setting]
	I1115 10:35:05.193068  368849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:05.194778  368849 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:35:05.205108  368849 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1115 10:35:05.205141  368849 kubeadm.go:602] duration metric: took 24.774201ms to restartPrimaryControlPlane
	I1115 10:35:05.205152  368849 kubeadm.go:403] duration metric: took 190.652551ms to StartCluster
	I1115 10:35:05.205176  368849 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:05.205246  368849 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:35:05.206385  368849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:05.206642  368849 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:35:05.207102  368849 config.go:182] Loaded profile config "no-preload-283677": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:05.207057  368849 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:35:05.207165  368849 addons.go:70] Setting storage-provisioner=true in profile "no-preload-283677"
	I1115 10:35:05.207181  368849 addons.go:239] Setting addon storage-provisioner=true in "no-preload-283677"
	W1115 10:35:05.207190  368849 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:35:05.207190  368849 addons.go:70] Setting dashboard=true in profile "no-preload-283677"
	I1115 10:35:05.207217  368849 addons.go:239] Setting addon dashboard=true in "no-preload-283677"
	I1115 10:35:05.207212  368849 addons.go:70] Setting default-storageclass=true in profile "no-preload-283677"
	W1115 10:35:05.207233  368849 addons.go:248] addon dashboard should already be in state true
	I1115 10:35:05.207275  368849 host.go:66] Checking if "no-preload-283677" exists ...
	I1115 10:35:05.207221  368849 host.go:66] Checking if "no-preload-283677" exists ...
	I1115 10:35:05.207358  368849 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-283677"
	I1115 10:35:05.207703  368849 cli_runner.go:164] Run: docker container inspect no-preload-283677 --format={{.State.Status}}
	I1115 10:35:05.207808  368849 cli_runner.go:164] Run: docker container inspect no-preload-283677 --format={{.State.Status}}
	I1115 10:35:05.207815  368849 cli_runner.go:164] Run: docker container inspect no-preload-283677 --format={{.State.Status}}
	I1115 10:35:05.211477  368849 out.go:179] * Verifying Kubernetes components...
	I1115 10:35:05.213140  368849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:05.232351  368849 addons.go:239] Setting addon default-storageclass=true in "no-preload-283677"
	W1115 10:35:05.232370  368849 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:35:05.232392  368849 host.go:66] Checking if "no-preload-283677" exists ...
	I1115 10:35:05.232689  368849 cli_runner.go:164] Run: docker container inspect no-preload-283677 --format={{.State.Status}}
	I1115 10:35:05.232981  368849 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:35:05.232986  368849 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 10:35:05.234251  368849 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:05.234272  368849 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:35:05.234273  368849 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 10:35:05.234330  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:05.238080  368849 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 10:35:05.238101  368849 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 10:35:05.238157  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:05.254202  368849 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:05.254227  368849 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:35:05.254298  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:05.257406  368849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
	I1115 10:35:05.259999  368849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
	I1115 10:35:05.279539  368849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
	I1115 10:35:05.585009  368849 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 10:35:05.585042  368849 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 10:35:05.590684  368849 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:35:05.602650  368849 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 10:35:05.602676  368849 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 10:35:05.672946  368849 node_ready.go:35] waiting up to 6m0s for node "no-preload-283677" to be "Ready" ...
	I1115 10:35:05.684403  368849 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 10:35:05.684432  368849 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 10:35:05.690190  368849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:05.692466  368849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:05.769359  368849 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 10:35:05.769382  368849 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 10:35:05.787603  368849 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 10:35:05.787632  368849 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 10:35:05.883926  368849 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 10:35:05.883964  368849 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 10:35:05.974542  368849 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 10:35:05.974570  368849 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 10:35:05.992886  368849 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 10:35:05.992918  368849 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 10:35:06.012080  368849 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:35:06.012115  368849 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 10:35:06.084770  368849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
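The single kubectl apply above installs all ten dashboard manifests staged under /etc/kubernetes/addons. A hedged follow-up check might watch the resulting rollout; the namespace and deployment name below are the usual kubernetes-dashboard defaults, assumed rather than taken from this log:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl \
      -n kubernetes-dashboard rollout status deployment/kubernetes-dashboard --timeout=120s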
	W1115 10:35:03.268007  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	W1115 10:35:05.764688  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	I1115 10:35:02.470272  367608 out.go:252]   - Generating certificates and keys ...
	I1115 10:35:02.470390  367608 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 10:35:02.470490  367608 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 10:35:02.779536  367608 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 10:35:02.945500  367608 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 10:35:03.605573  367608 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 10:35:03.703228  367608 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 10:35:04.283194  367608 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 10:35:04.283412  367608 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-026691 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1115 10:35:04.682718  367608 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 10:35:04.683098  367608 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-026691 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1115 10:35:05.030500  367608 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 10:35:05.382333  367608 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 10:35:06.139095  367608 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 10:35:06.139385  367608 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 10:35:06.418023  367608 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 10:35:06.723330  367608 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 10:35:07.482824  367608 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 10:35:08.034181  367608 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 10:35:08.156422  367608 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 10:35:08.157215  367608 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 10:35:08.161626  367608 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 10:35:04.106620  358343 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 10:35:04.111162  358343 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 10:35:04.111192  358343 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 10:35:04.124905  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 10:35:04.382718  358343 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:35:04.382786  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:04.382833  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-719574 minikube.k8s.io/updated_at=2025_11_15T10_35_04_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510 minikube.k8s.io/name=embed-certs-719574 minikube.k8s.io/primary=true
	I1115 10:35:04.628907  358343 ops.go:34] apiserver oom_adj: -16
	I1115 10:35:04.629011  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:05.129861  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:05.629943  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:06.129410  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:06.629879  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:07.129154  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:07.630326  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:08.129680  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:08.629294  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:08.746205  358343 kubeadm.go:1114] duration metric: took 4.363478497s to wait for elevateKubeSystemPrivileges
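The repeated "kubectl get sa default" calls above (roughly every 500ms) are a readiness poll: once the "default" ServiceAccount exists, the minikube-rbac clusterrolebinding created at 10:35:04.382786 is usable and kube-system privileges are considered elevated. The loop written out as a sketch:

    # poll until the default ServiceAccount appears, then proceed
    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done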
	I1115 10:35:08.746256  358343 kubeadm.go:403] duration metric: took 20.927857879s to StartCluster
	I1115 10:35:08.746281  358343 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:08.746351  358343 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:35:08.748593  358343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:08.748832  358343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 10:35:08.748841  358343 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:35:08.749290  358343 config.go:182] Loaded profile config "embed-certs-719574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:08.749362  358343 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:35:08.749448  358343 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-719574"
	I1115 10:35:08.749468  358343 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-719574"
	I1115 10:35:08.749501  358343 host.go:66] Checking if "embed-certs-719574" exists ...
	I1115 10:35:08.751286  358343 addons.go:70] Setting default-storageclass=true in profile "embed-certs-719574"
	I1115 10:35:08.751326  358343 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-719574"
	I1115 10:35:08.751768  358343 cli_runner.go:164] Run: docker container inspect embed-certs-719574 --format={{.State.Status}}
	I1115 10:35:08.752060  358343 cli_runner.go:164] Run: docker container inspect embed-certs-719574 --format={{.State.Status}}
	I1115 10:35:08.756196  358343 out.go:179] * Verifying Kubernetes components...
	I1115 10:35:08.757464  358343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:08.784018  358343 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:35:08.785232  358343 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:08.785253  358343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:35:08.785418  358343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:08.788366  358343 addons.go:239] Setting addon default-storageclass=true in "embed-certs-719574"
	I1115 10:35:08.788420  358343 host.go:66] Checking if "embed-certs-719574" exists ...
	I1115 10:35:08.788915  358343 cli_runner.go:164] Run: docker container inspect embed-certs-719574 --format={{.State.Status}}
	I1115 10:35:08.826800  358343 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:08.826832  358343 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:35:08.826903  358343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:08.829210  358343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:08.860334  358343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:07.982406  368849 node_ready.go:49] node "no-preload-283677" is "Ready"
	I1115 10:35:07.982441  368849 node_ready.go:38] duration metric: took 2.309447891s for node "no-preload-283677" to be "Ready" ...
	I1115 10:35:07.982458  368849 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:35:07.982514  368849 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:35:08.305043  368849 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.614815003s)
	I1115 10:35:09.469849  368849 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.777345954s)
	I1115 10:35:09.570449  368849 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.587920258s)
	I1115 10:35:09.570502  368849 api_server.go:72] duration metric: took 4.363836242s to wait for apiserver process to appear ...
	I1115 10:35:09.570512  368849 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:35:09.570533  368849 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:35:09.571399  368849 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.485550405s)
	I1115 10:35:09.577304  368849 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:35:09.577335  368849 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
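The 500 here is expected this early in the restart: every check passes except poststarthook/rbac/bootstrap-roles, which flips to ok once the bootstrap RBAC objects are created (the next probe at 10:35:10 returns 200). The same verbose probe can be reproduced from a shell; unauthenticated reads of /healthz are permitted by the default system:public-info-viewer binding, so skipping TLS verification is enough for a quick check:

    curl -sk "https://192.168.76.2:8443/healthz?verbose"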
	I1115 10:35:09.615635  368849 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-283677 addons enable metrics-server
	
	I1115 10:35:09.664183  368849 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1115 10:35:09.099411  358343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 10:35:09.117209  358343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:09.178025  358343 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:35:09.223129  358343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:09.623693  358343 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
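The sed pipeline at 10:35:09.099 inserts a hosts block mapping 192.168.94.1 to host.minikube.internal (with fallthrough) ahead of CoreDNS's forward plugin and replaces the kube-system/coredns ConfigMap in place. A sketch for confirming the record landed, using the same kubeconfig and staged kubectl as above:

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'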
	I1115 10:35:10.025397  358343 node_ready.go:35] waiting up to 6m0s for node "embed-certs-719574" to be "Ready" ...
	I1115 10:35:10.035887  358343 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1115 10:35:09.713917  368849 addons.go:515] duration metric: took 4.506959144s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1115 10:35:10.070918  368849 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:35:10.081303  368849 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1115 10:35:10.083487  368849 api_server.go:141] control plane version: v1.34.1
	I1115 10:35:10.083520  368849 api_server.go:131] duration metric: took 513.000945ms to wait for apiserver health ...
	I1115 10:35:10.083532  368849 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:35:10.088663  368849 system_pods.go:59] 8 kube-system pods found
	I1115 10:35:10.088717  368849 system_pods.go:61] "coredns-66bc5c9577-66nkj" [077957ec-b312-4412-a6b1-ae36eb2e7e16] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:10.088737  368849 system_pods.go:61] "etcd-no-preload-283677" [bf5ec52e-181c-4b5c-abb2-80ac3fcc26ef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:35:10.088745  368849 system_pods.go:61] "kindnet-x5rwg" [e504759b-46cd-4a41-a8cd-050722131a7d] Running
	I1115 10:35:10.088754  368849 system_pods.go:61] "kube-apiserver-no-preload-283677" [a1c78910-24db-4447-bfb5-f0dd4685d2b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:35:10.088761  368849 system_pods.go:61] "kube-controller-manager-no-preload-283677" [c7c2ba73-517d-48fc-b874-2ab3b653c5a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:35:10.088771  368849 system_pods.go:61] "kube-proxy-vjbxg" [68dffa75-569b-42ef-b4b2-c02a9c1938e7] Running
	I1115 10:35:10.088779  368849 system_pods.go:61] "kube-scheduler-no-preload-283677" [9e0abc54-bc72-4122-b46f-08a74328972d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:35:10.088786  368849 system_pods.go:61] "storage-provisioner" [24222831-4bc3-4c24-87ba-fd523a1e0c85] Running
	I1115 10:35:10.088797  368849 system_pods.go:74] duration metric: took 5.256404ms to wait for pod list to return data ...
	I1115 10:35:10.088807  368849 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:35:10.091629  368849 default_sa.go:45] found service account: "default"
	I1115 10:35:10.091653  368849 default_sa.go:55] duration metric: took 2.838862ms for default service account to be created ...
	I1115 10:35:10.091661  368849 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:35:10.094315  368849 system_pods.go:86] 8 kube-system pods found
	I1115 10:35:10.094343  368849 system_pods.go:89] "coredns-66bc5c9577-66nkj" [077957ec-b312-4412-a6b1-ae36eb2e7e16] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:10.094352  368849 system_pods.go:89] "etcd-no-preload-283677" [bf5ec52e-181c-4b5c-abb2-80ac3fcc26ef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:35:10.094358  368849 system_pods.go:89] "kindnet-x5rwg" [e504759b-46cd-4a41-a8cd-050722131a7d] Running
	I1115 10:35:10.094364  368849 system_pods.go:89] "kube-apiserver-no-preload-283677" [a1c78910-24db-4447-bfb5-f0dd4685d2b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:35:10.094370  368849 system_pods.go:89] "kube-controller-manager-no-preload-283677" [c7c2ba73-517d-48fc-b874-2ab3b653c5a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:35:10.094375  368849 system_pods.go:89] "kube-proxy-vjbxg" [68dffa75-569b-42ef-b4b2-c02a9c1938e7] Running
	I1115 10:35:10.094380  368849 system_pods.go:89] "kube-scheduler-no-preload-283677" [9e0abc54-bc72-4122-b46f-08a74328972d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:35:10.094385  368849 system_pods.go:89] "storage-provisioner" [24222831-4bc3-4c24-87ba-fd523a1e0c85] Running
	I1115 10:35:10.094397  368849 system_pods.go:126] duration metric: took 2.730305ms to wait for k8s-apps to be running ...
	I1115 10:35:10.094406  368849 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:35:10.094448  368849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:35:10.111038  368849 system_svc.go:56] duration metric: took 16.619407ms WaitForService to wait for kubelet
	I1115 10:35:10.111085  368849 kubeadm.go:587] duration metric: took 4.90441795s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:35:10.111109  368849 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:35:10.115110  368849 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:35:10.115139  368849 node_conditions.go:123] node cpu capacity is 8
	I1115 10:35:10.115152  368849 node_conditions.go:105] duration metric: took 4.037488ms to run NodePressure ...
	I1115 10:35:10.115164  368849 start.go:242] waiting for startup goroutines ...
	I1115 10:35:10.115171  368849 start.go:247] waiting for cluster config update ...
	I1115 10:35:10.115181  368849 start.go:256] writing updated cluster config ...
	I1115 10:35:10.115423  368849 ssh_runner.go:195] Run: rm -f paused
	I1115 10:35:10.120133  368849 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:35:10.125364  368849 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-66nkj" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 10:35:07.766656  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	W1115 10:35:09.768019  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	W1115 10:35:12.265348  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	I1115 10:35:08.162939  367608 out.go:252]   - Booting up control plane ...
	I1115 10:35:08.163067  367608 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 10:35:08.163214  367608 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 10:35:08.164559  367608 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 10:35:08.191442  367608 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 10:35:08.191597  367608 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 10:35:08.204536  367608 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 10:35:08.204949  367608 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 10:35:08.205027  367608 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 10:35:08.354479  367608 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 10:35:08.354645  367608 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 10:35:08.861234  367608 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 506.557584ms
	I1115 10:35:08.866845  367608 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 10:35:08.866999  367608 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1115 10:35:08.867498  367608 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 10:35:08.867607  367608 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 10:35:11.654618  367608 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.787069529s
	I1115 10:35:10.037459  358343 addons.go:515] duration metric: took 1.288097776s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1115 10:35:10.128075  358343 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-719574" context rescaled to 1 replicas
	W1115 10:35:12.028773  358343 node_ready.go:57] node "embed-certs-719574" has "Ready":"False" status (will retry)
	I1115 10:35:12.738784  367608 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.871726778s
	I1115 10:35:14.368847  367608 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501769669s
	I1115 10:35:14.386759  367608 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 10:35:14.403947  367608 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 10:35:14.415210  367608 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 10:35:14.415430  367608 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-026691 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 10:35:14.423662  367608 kubeadm.go:319] [bootstrap-token] Using token: la4gix.ai6olk5ks1jiibdz
	I1115 10:35:14.424934  367608 out.go:252]   - Configuring RBAC rules ...
	I1115 10:35:14.425149  367608 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 10:35:14.429405  367608 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 10:35:14.436815  367608 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 10:35:14.440353  367608 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 10:35:14.443132  367608 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 10:35:14.445801  367608 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 10:35:14.780630  367608 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 10:35:15.244870  367608 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 10:35:15.776235  367608 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 10:35:15.777454  367608 kubeadm.go:319] 
	I1115 10:35:15.777560  367608 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 10:35:15.777580  367608 kubeadm.go:319] 
	I1115 10:35:15.777679  367608 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 10:35:15.777709  367608 kubeadm.go:319] 
	I1115 10:35:15.777773  367608 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 10:35:15.777885  367608 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 10:35:15.777990  367608 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 10:35:15.778001  367608 kubeadm.go:319] 
	I1115 10:35:15.778075  367608 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 10:35:15.778084  367608 kubeadm.go:319] 
	I1115 10:35:15.778150  367608 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 10:35:15.778161  367608 kubeadm.go:319] 
	I1115 10:35:15.778232  367608 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 10:35:15.778338  367608 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 10:35:15.778434  367608 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 10:35:15.778441  367608 kubeadm.go:319] 
	I1115 10:35:15.778545  367608 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 10:35:15.778663  367608 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 10:35:15.778670  367608 kubeadm.go:319] 
	I1115 10:35:15.778785  367608 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token la4gix.ai6olk5ks1jiibdz \
	I1115 10:35:15.778928  367608 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c958101d3a67868123c5e439a6c1ea6e30a99a6538b76cbe461e2c919cf1aec \
	I1115 10:35:15.778967  367608 kubeadm.go:319] 	--control-plane 
	I1115 10:35:15.778973  367608 kubeadm.go:319] 
	I1115 10:35:15.779089  367608 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 10:35:15.779096  367608 kubeadm.go:319] 
	I1115 10:35:15.779206  367608 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token la4gix.ai6olk5ks1jiibdz \
	I1115 10:35:15.779345  367608 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c958101d3a67868123c5e439a6c1ea6e30a99a6538b76cbe461e2c919cf1aec 
	I1115 10:35:15.783505  367608 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1115 10:35:15.783826  367608 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1115 10:35:15.784060  367608 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 10:35:15.784094  367608 cni.go:84] Creating CNI manager for ""
	I1115 10:35:15.784108  367608 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:35:15.786778  367608 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1115 10:35:12.132013  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:14.175249  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:16.631763  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:14.265828  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	W1115 10:35:16.764850  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	I1115 10:35:15.788182  367608 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 10:35:15.793094  367608 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 10:35:15.793115  367608 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 10:35:15.809048  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 10:35:16.098742  367608 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:35:16.098819  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:16.098855  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-026691 minikube.k8s.io/updated_at=2025_11_15T10_35_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510 minikube.k8s.io/name=default-k8s-diff-port-026691 minikube.k8s.io/primary=true
	I1115 10:35:16.112393  367608 ops.go:34] apiserver oom_adj: -16
	I1115 10:35:16.271409  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:16.771783  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:17.271668  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1115 10:35:14.029094  358343 node_ready.go:57] node "embed-certs-719574" has "Ready":"False" status (will retry)
	W1115 10:35:16.031395  358343 node_ready.go:57] node "embed-certs-719574" has "Ready":"False" status (will retry)
	W1115 10:35:18.528752  358343 node_ready.go:57] node "embed-certs-719574" has "Ready":"False" status (will retry)
	I1115 10:35:17.772413  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:18.271612  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:18.772434  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:19.272327  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:19.771542  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:20.271635  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:20.771571  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:20.950683  367608 kubeadm.go:1114] duration metric: took 4.851926991s to wait for elevateKubeSystemPrivileges
	I1115 10:35:20.950730  367608 kubeadm.go:403] duration metric: took 18.793128713s to StartCluster
	I1115 10:35:20.950755  367608 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:20.950836  367608 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:35:20.954212  367608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:20.954530  367608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 10:35:20.954547  367608 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:35:20.954629  367608 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:35:20.954736  367608 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-026691"
	I1115 10:35:20.954764  367608 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-026691"
	I1115 10:35:20.954792  367608 config.go:182] Loaded profile config "default-k8s-diff-port-026691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:20.954800  367608 host.go:66] Checking if "default-k8s-diff-port-026691" exists ...
	I1115 10:35:20.954806  367608 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-026691"
	I1115 10:35:20.955146  367608 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-026691"
	I1115 10:35:20.955492  367608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:35:20.955510  367608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:35:20.956132  367608 out.go:179] * Verifying Kubernetes components...
	I1115 10:35:20.957534  367608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:20.983066  367608 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-026691"
	I1115 10:35:20.983119  367608 host.go:66] Checking if "default-k8s-diff-port-026691" exists ...
	I1115 10:35:20.983674  367608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:35:20.983883  367608 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:35:20.985223  367608 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:20.985248  367608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:35:20.985304  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:35:21.009815  367608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:35:21.012487  367608 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:21.012509  367608 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:35:21.012558  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:35:21.043532  367608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:35:21.227388  367608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 10:35:21.242981  367608 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:21.243543  367608 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:21.345690  367608 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:35:21.760244  367608 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1115 10:35:21.984321  367608 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-026691" to be "Ready" ...
	I1115 10:35:21.984944  367608 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1115 10:35:19.130395  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:21.131716  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:19.266357  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	W1115 10:35:21.765103  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	I1115 10:35:21.986234  367608 addons.go:515] duration metric: took 1.031599786s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1115 10:35:22.264566  367608 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-026691" context rescaled to 1 replicas
	I1115 10:35:20.529126  358343 node_ready.go:49] node "embed-certs-719574" is "Ready"
	I1115 10:35:20.529163  358343 node_ready.go:38] duration metric: took 10.503731212s for node "embed-certs-719574" to be "Ready" ...
	I1115 10:35:20.529181  358343 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:35:20.529240  358343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:35:20.545196  358343 api_server.go:72] duration metric: took 11.796320759s to wait for apiserver process to appear ...
	I1115 10:35:20.545225  358343 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:35:20.545247  358343 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1115 10:35:20.549570  358343 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1115 10:35:20.550653  358343 api_server.go:141] control plane version: v1.34.1
	I1115 10:35:20.550677  358343 api_server.go:131] duration metric: took 5.444907ms to wait for apiserver health ...
	I1115 10:35:20.550686  358343 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:35:20.554086  358343 system_pods.go:59] 8 kube-system pods found
	I1115 10:35:20.554122  358343 system_pods.go:61] "coredns-66bc5c9577-fjzk5" [d4d185bc-88ec-4edb-b250-6a59ee426bf5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:20.554130  358343 system_pods.go:61] "etcd-embed-certs-719574" [293cb467-4f15-417b-944e-457c3ac7e56f] Running
	I1115 10:35:20.554138  358343 system_pods.go:61] "kindnet-ql2r4" [224e2951-6c97-449d-8ff8-f72aa6d36d60] Running
	I1115 10:35:20.554143  358343 system_pods.go:61] "kube-apiserver-embed-certs-719574" [43a83385-040c-470f-8242-5066617ac8d7] Running
	I1115 10:35:20.554152  358343 system_pods.go:61] "kube-controller-manager-embed-certs-719574" [78f16cd1-774e-4556-9db6-bca54bc8214b] Running
	I1115 10:35:20.554156  358343 system_pods.go:61] "kube-proxy-kmc8c" [3534c76c-4b99-4b84-ba00-21d0d49e770f] Running
	I1115 10:35:20.554161  358343 system_pods.go:61] "kube-scheduler-embed-certs-719574" [9b8d2d78-442d-48cc-88be-29b9eb7017e7] Running
	I1115 10:35:20.554169  358343 system_pods.go:61] "storage-provisioner" [39c3baf2-24de-475e-aeef-a10825991ca3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:35:20.554182  358343 system_pods.go:74] duration metric: took 3.483657ms to wait for pod list to return data ...
	I1115 10:35:20.554197  358343 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:35:20.556665  358343 default_sa.go:45] found service account: "default"
	I1115 10:35:20.556685  358343 default_sa.go:55] duration metric: took 2.480305ms for default service account to be created ...
	I1115 10:35:20.556695  358343 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:35:20.559910  358343 system_pods.go:86] 8 kube-system pods found
	I1115 10:35:20.559938  358343 system_pods.go:89] "coredns-66bc5c9577-fjzk5" [d4d185bc-88ec-4edb-b250-6a59ee426bf5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:20.559965  358343 system_pods.go:89] "etcd-embed-certs-719574" [293cb467-4f15-417b-944e-457c3ac7e56f] Running
	I1115 10:35:20.559978  358343 system_pods.go:89] "kindnet-ql2r4" [224e2951-6c97-449d-8ff8-f72aa6d36d60] Running
	I1115 10:35:20.559986  358343 system_pods.go:89] "kube-apiserver-embed-certs-719574" [43a83385-040c-470f-8242-5066617ac8d7] Running
	I1115 10:35:20.559993  358343 system_pods.go:89] "kube-controller-manager-embed-certs-719574" [78f16cd1-774e-4556-9db6-bca54bc8214b] Running
	I1115 10:35:20.560001  358343 system_pods.go:89] "kube-proxy-kmc8c" [3534c76c-4b99-4b84-ba00-21d0d49e770f] Running
	I1115 10:35:20.560007  358343 system_pods.go:89] "kube-scheduler-embed-certs-719574" [9b8d2d78-442d-48cc-88be-29b9eb7017e7] Running
	I1115 10:35:20.560018  358343 system_pods.go:89] "storage-provisioner" [39c3baf2-24de-475e-aeef-a10825991ca3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:35:20.560051  358343 retry.go:31] will retry after 304.306696ms: missing components: kube-dns
	I1115 10:35:20.869745  358343 system_pods.go:86] 8 kube-system pods found
	I1115 10:35:20.870073  358343 system_pods.go:89] "coredns-66bc5c9577-fjzk5" [d4d185bc-88ec-4edb-b250-6a59ee426bf5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:20.870105  358343 system_pods.go:89] "etcd-embed-certs-719574" [293cb467-4f15-417b-944e-457c3ac7e56f] Running
	I1115 10:35:20.870140  358343 system_pods.go:89] "kindnet-ql2r4" [224e2951-6c97-449d-8ff8-f72aa6d36d60] Running
	I1115 10:35:20.870174  358343 system_pods.go:89] "kube-apiserver-embed-certs-719574" [43a83385-040c-470f-8242-5066617ac8d7] Running
	I1115 10:35:20.870205  358343 system_pods.go:89] "kube-controller-manager-embed-certs-719574" [78f16cd1-774e-4556-9db6-bca54bc8214b] Running
	I1115 10:35:20.870223  358343 system_pods.go:89] "kube-proxy-kmc8c" [3534c76c-4b99-4b84-ba00-21d0d49e770f] Running
	I1115 10:35:20.870251  358343 system_pods.go:89] "kube-scheduler-embed-certs-719574" [9b8d2d78-442d-48cc-88be-29b9eb7017e7] Running
	I1115 10:35:20.870298  358343 system_pods.go:89] "storage-provisioner" [39c3baf2-24de-475e-aeef-a10825991ca3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:35:20.870334  358343 retry.go:31] will retry after 263.535875ms: missing components: kube-dns
	I1115 10:35:21.139822  358343 system_pods.go:86] 8 kube-system pods found
	I1115 10:35:21.139860  358343 system_pods.go:89] "coredns-66bc5c9577-fjzk5" [d4d185bc-88ec-4edb-b250-6a59ee426bf5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:21.139867  358343 system_pods.go:89] "etcd-embed-certs-719574" [293cb467-4f15-417b-944e-457c3ac7e56f] Running
	I1115 10:35:21.139875  358343 system_pods.go:89] "kindnet-ql2r4" [224e2951-6c97-449d-8ff8-f72aa6d36d60] Running
	I1115 10:35:21.139879  358343 system_pods.go:89] "kube-apiserver-embed-certs-719574" [43a83385-040c-470f-8242-5066617ac8d7] Running
	I1115 10:35:21.139885  358343 system_pods.go:89] "kube-controller-manager-embed-certs-719574" [78f16cd1-774e-4556-9db6-bca54bc8214b] Running
	I1115 10:35:21.139896  358343 system_pods.go:89] "kube-proxy-kmc8c" [3534c76c-4b99-4b84-ba00-21d0d49e770f] Running
	I1115 10:35:21.139902  358343 system_pods.go:89] "kube-scheduler-embed-certs-719574" [9b8d2d78-442d-48cc-88be-29b9eb7017e7] Running
	I1115 10:35:21.139910  358343 system_pods.go:89] "storage-provisioner" [39c3baf2-24de-475e-aeef-a10825991ca3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:35:21.139934  358343 retry.go:31] will retry after 299.264282ms: missing components: kube-dns
	I1115 10:35:21.445165  358343 system_pods.go:86] 8 kube-system pods found
	I1115 10:35:21.445282  358343 system_pods.go:89] "coredns-66bc5c9577-fjzk5" [d4d185bc-88ec-4edb-b250-6a59ee426bf5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:21.445340  358343 system_pods.go:89] "etcd-embed-certs-719574" [293cb467-4f15-417b-944e-457c3ac7e56f] Running
	I1115 10:35:21.445350  358343 system_pods.go:89] "kindnet-ql2r4" [224e2951-6c97-449d-8ff8-f72aa6d36d60] Running
	I1115 10:35:21.445355  358343 system_pods.go:89] "kube-apiserver-embed-certs-719574" [43a83385-040c-470f-8242-5066617ac8d7] Running
	I1115 10:35:21.445361  358343 system_pods.go:89] "kube-controller-manager-embed-certs-719574" [78f16cd1-774e-4556-9db6-bca54bc8214b] Running
	I1115 10:35:21.445366  358343 system_pods.go:89] "kube-proxy-kmc8c" [3534c76c-4b99-4b84-ba00-21d0d49e770f] Running
	I1115 10:35:21.445371  358343 system_pods.go:89] "kube-scheduler-embed-certs-719574" [9b8d2d78-442d-48cc-88be-29b9eb7017e7] Running
	I1115 10:35:21.445392  358343 system_pods.go:89] "storage-provisioner" [39c3baf2-24de-475e-aeef-a10825991ca3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:35:21.445412  358343 retry.go:31] will retry after 557.501681ms: missing components: kube-dns
	I1115 10:35:22.008757  358343 system_pods.go:86] 8 kube-system pods found
	I1115 10:35:22.008809  358343 system_pods.go:89] "coredns-66bc5c9577-fjzk5" [d4d185bc-88ec-4edb-b250-6a59ee426bf5] Running
	I1115 10:35:22.008817  358343 system_pods.go:89] "etcd-embed-certs-719574" [293cb467-4f15-417b-944e-457c3ac7e56f] Running
	I1115 10:35:22.008823  358343 system_pods.go:89] "kindnet-ql2r4" [224e2951-6c97-449d-8ff8-f72aa6d36d60] Running
	I1115 10:35:22.008830  358343 system_pods.go:89] "kube-apiserver-embed-certs-719574" [43a83385-040c-470f-8242-5066617ac8d7] Running
	I1115 10:35:22.008841  358343 system_pods.go:89] "kube-controller-manager-embed-certs-719574" [78f16cd1-774e-4556-9db6-bca54bc8214b] Running
	I1115 10:35:22.008847  358343 system_pods.go:89] "kube-proxy-kmc8c" [3534c76c-4b99-4b84-ba00-21d0d49e770f] Running
	I1115 10:35:22.008856  358343 system_pods.go:89] "kube-scheduler-embed-certs-719574" [9b8d2d78-442d-48cc-88be-29b9eb7017e7] Running
	I1115 10:35:22.008861  358343 system_pods.go:89] "storage-provisioner" [39c3baf2-24de-475e-aeef-a10825991ca3] Running
	I1115 10:35:22.008871  358343 system_pods.go:126] duration metric: took 1.452168821s to wait for k8s-apps to be running ...
	I1115 10:35:22.008883  358343 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:35:22.008946  358343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:35:22.026719  358343 system_svc.go:56] duration metric: took 17.821769ms WaitForService to wait for kubelet
	I1115 10:35:22.026753  358343 kubeadm.go:587] duration metric: took 13.277885015s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:35:22.026782  358343 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:35:22.030378  358343 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:35:22.030411  358343 node_conditions.go:123] node cpu capacity is 8
	I1115 10:35:22.030431  358343 node_conditions.go:105] duration metric: took 3.642261ms to run NodePressure ...
	I1115 10:35:22.030455  358343 start.go:242] waiting for startup goroutines ...
	I1115 10:35:22.030468  358343 start.go:247] waiting for cluster config update ...
	I1115 10:35:22.030481  358343 start.go:256] writing updated cluster config ...
	I1115 10:35:22.030818  358343 ssh_runner.go:195] Run: rm -f paused
	I1115 10:35:22.035757  358343 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:35:22.039154  358343 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fjzk5" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:22.043361  358343 pod_ready.go:94] pod "coredns-66bc5c9577-fjzk5" is "Ready"
	I1115 10:35:22.043378  358343 pod_ready.go:86] duration metric: took 4.200087ms for pod "coredns-66bc5c9577-fjzk5" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:22.045206  358343 pod_ready.go:83] waiting for pod "etcd-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:22.049024  358343 pod_ready.go:94] pod "etcd-embed-certs-719574" is "Ready"
	I1115 10:35:22.049042  358343 pod_ready.go:86] duration metric: took 3.816184ms for pod "etcd-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:22.050972  358343 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:22.054609  358343 pod_ready.go:94] pod "kube-apiserver-embed-certs-719574" is "Ready"
	I1115 10:35:22.054632  358343 pod_ready.go:86] duration metric: took 3.638655ms for pod "kube-apiserver-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:22.056558  358343 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:22.439711  358343 pod_ready.go:94] pod "kube-controller-manager-embed-certs-719574" is "Ready"
	I1115 10:35:22.439736  358343 pod_ready.go:86] duration metric: took 383.156713ms for pod "kube-controller-manager-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:22.640073  358343 pod_ready.go:83] waiting for pod "kube-proxy-kmc8c" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:23.040176  358343 pod_ready.go:94] pod "kube-proxy-kmc8c" is "Ready"
	I1115 10:35:23.040208  358343 pod_ready.go:86] duration metric: took 400.110752ms for pod "kube-proxy-kmc8c" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:23.240598  358343 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:23.639521  358343 pod_ready.go:94] pod "kube-scheduler-embed-certs-719574" is "Ready"
	I1115 10:35:23.639548  358343 pod_ready.go:86] duration metric: took 398.923501ms for pod "kube-scheduler-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:23.639560  358343 pod_ready.go:40] duration metric: took 1.603769873s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:35:23.683447  358343 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:35:23.685103  358343 out.go:179] * Done! kubectl is now configured to use "embed-certs-719574" cluster and "default" namespace by default
	W1115 10:35:23.630738  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:25.631024  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:24.264211  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	W1115 10:35:26.763775  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	W1115 10:35:23.987891  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:35:25.988063  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:35:29.264506  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	I1115 10:35:30.263692  361423 pod_ready.go:94] pod "coredns-5dd5756b68-bdpfv" is "Ready"
	I1115 10:35:30.263719  361423 pod_ready.go:86] duration metric: took 33.505250042s for pod "coredns-5dd5756b68-bdpfv" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:30.266346  361423 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:30.270190  361423 pod_ready.go:94] pod "etcd-old-k8s-version-087235" is "Ready"
	I1115 10:35:30.270213  361423 pod_ready.go:86] duration metric: took 3.846822ms for pod "etcd-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:30.272557  361423 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:30.276198  361423 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-087235" is "Ready"
	I1115 10:35:30.276215  361423 pod_ready.go:86] duration metric: took 3.640479ms for pod "kube-apiserver-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:30.278541  361423 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:30.461598  361423 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-087235" is "Ready"
	I1115 10:35:30.461629  361423 pod_ready.go:86] duration metric: took 183.068971ms for pod "kube-controller-manager-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:30.662428  361423 pod_ready.go:83] waiting for pod "kube-proxy-gl22j" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:31.062369  361423 pod_ready.go:94] pod "kube-proxy-gl22j" is "Ready"
	I1115 10:35:31.062396  361423 pod_ready.go:86] duration metric: took 399.946151ms for pod "kube-proxy-gl22j" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:31.263048  361423 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:31.662025  361423 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-087235" is "Ready"
	I1115 10:35:31.662055  361423 pod_ready.go:86] duration metric: took 398.980765ms for pod "kube-scheduler-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:31.662070  361423 pod_ready.go:40] duration metric: took 34.909342767s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:35:31.706606  361423 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1115 10:35:31.708343  361423 out.go:203] 
	W1115 10:35:31.709588  361423 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1115 10:35:31.710764  361423 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1115 10:35:31.711983  361423 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-087235" cluster and "default" namespace by default
	W1115 10:35:28.131245  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:30.131470  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:28.487367  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:35:30.987373  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:35:32.630893  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:35.131203  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:32.987818  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:35:35.487942  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:35:37.630661  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:39.631229  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:41.631298  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:37.488558  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:35:39.987301  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:35:41.987883  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 15 10:35:31 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:31.050560271Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:35:31 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:31.05589738Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:35:31 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:31.056471982Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:35:31 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:31.076788197Z" level=info msg="Created container 235ae938bb4114fcf19fc01771df50bcd55d9e0253dd1fc357c036d91b1dbd6d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-58wdf/dashboard-metrics-scraper" id=8010cbdb-7bec-4bce-90d1-dc4e4f99525c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:35:31 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:31.077414109Z" level=info msg="Starting container: 235ae938bb4114fcf19fc01771df50bcd55d9e0253dd1fc357c036d91b1dbd6d" id=6d33c515-7f00-42ae-894b-75b9535b33bf name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:35:31 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:31.079368598Z" level=info msg="Started container" PID=1812 containerID=235ae938bb4114fcf19fc01771df50bcd55d9e0253dd1fc357c036d91b1dbd6d description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-58wdf/dashboard-metrics-scraper id=6d33c515-7f00-42ae-894b-75b9535b33bf name=/runtime.v1.RuntimeService/StartContainer sandboxID=e0fbb810feadeffbb61460bd785a3b20faec65f04dae7b98210e0b136e0e93d7
	Nov 15 10:35:31 old-k8s-version-087235 conmon[1810]: conmon 235ae938bb4114fcf19f <ninfo>: container 1812 exited with status 1
	Nov 15 10:35:31 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:31.385811844Z" level=info msg="Removing container: 009b1abd319e558a009c9e01ea269a7e703df0fddbd0a74c73f9e2fcfd875758" id=faf72980-254f-43b1-974e-e968aefa14af name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:35:31 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:31.393293421Z" level=info msg="Error loading conmon cgroup of container 009b1abd319e558a009c9e01ea269a7e703df0fddbd0a74c73f9e2fcfd875758: cgroup deleted" id=faf72980-254f-43b1-974e-e968aefa14af name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:35:31 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:31.396879214Z" level=info msg="Removed container 009b1abd319e558a009c9e01ea269a7e703df0fddbd0a74c73f9e2fcfd875758: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-58wdf/dashboard-metrics-scraper" id=faf72980-254f-43b1-974e-e968aefa14af name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:35:34 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:34.02641801Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:35:34 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:34.031044812Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:35:34 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:34.031073767Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:35:34 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:34.031095792Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:35:34 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:34.034877736Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:35:34 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:34.034901349Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:35:34 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:34.034918273Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:35:34 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:34.038475411Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:35:34 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:34.038497108Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:35:34 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:34.038513406Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:35:34 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:34.042218644Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:35:34 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:34.042241969Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:35:34 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:34.042258816Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:35:34 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:34.04602024Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:35:34 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:34.046041888Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	235ae938bb411       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago      Exited              dashboard-metrics-scraper   2                   e0fbb810feade       dashboard-metrics-scraper-5f989dc9cf-58wdf       kubernetes-dashboard
	141d480cf3b64       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         2                   61175e80abd59       storage-provisioner                              kube-system
	9e7dab2808e72       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   29 seconds ago      Running             kubernetes-dashboard        0                   89607ac1ff73b       kubernetes-dashboard-8694d4445c-sh86n            kubernetes-dashboard
	0e4febf6eeb91       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           52 seconds ago      Running             coredns                     1                   1d5b30ab515c3       coredns-5dd5756b68-bdpfv                         kube-system
	0675b2d0a0d42       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   8eb7ffb1e7780       busybox                                          default
	034573ebc5310       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         1                   61175e80abd59       storage-provisioner                              kube-system
	7594c7c2d6107       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 1                   b00571944eb80       kindnet-7btvm                                    kube-system
	ba78e319d11c5       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           52 seconds ago      Running             kube-proxy                  1                   3da76bdff9320       kube-proxy-gl22j                                 kube-system
	b8b1ccd6451f4       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           56 seconds ago      Running             kube-controller-manager     1                   d182e8cf08c23       kube-controller-manager-old-k8s-version-087235   kube-system
	8ce75f5e9ad57       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           56 seconds ago      Running             kube-scheduler              1                   9c27fede70f64       kube-scheduler-old-k8s-version-087235            kube-system
	3fd62a9dd4769       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           56 seconds ago      Running             kube-apiserver              1                   647e749f01c15       kube-apiserver-old-k8s-version-087235            kube-system
	dabb8b4809806       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           56 seconds ago      Running             etcd                        1                   ae9512cde292a       etcd-old-k8s-version-087235                      kube-system
	
	
	==> coredns [0e4febf6eeb916f0992d7e320785e3dbfccc6cfc0e69f63884d452c516e43258] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54176 - 8302 "HINFO IN 8801084867188015004.839990236938567246. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.014982633s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-087235
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-087235
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=old-k8s-version-087235
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_33_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:33:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-087235
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:35:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:35:23 +0000   Sat, 15 Nov 2025 10:33:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:35:23 +0000   Sat, 15 Nov 2025 10:33:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:35:23 +0000   Sat, 15 Nov 2025 10:33:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:35:23 +0000   Sat, 15 Nov 2025 10:34:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-087235
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                fdfc6964-6bf8-45b6-8dd6-3b0bdf50e4d6
	  Boot ID:                    6a33f504-3a8c-4d49-8fc5-301cf5275efc
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-5dd5756b68-bdpfv                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-old-k8s-version-087235                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m1s
	  kube-system                 kindnet-7btvm                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-old-k8s-version-087235             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-old-k8s-version-087235    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-gl22j                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-old-k8s-version-087235             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-58wdf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-sh86n             0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  Starting                 52s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m7s (x8 over 2m7s)  kubelet          Node old-k8s-version-087235 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m7s (x8 over 2m7s)  kubelet          Node old-k8s-version-087235 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m7s (x8 over 2m7s)  kubelet          Node old-k8s-version-087235 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m1s                 kubelet          Node old-k8s-version-087235 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m1s                 kubelet          Node old-k8s-version-087235 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s                 kubelet          Node old-k8s-version-087235 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m1s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s                 node-controller  Node old-k8s-version-087235 event: Registered Node old-k8s-version-087235 in Controller
	  Normal  NodeReady                92s                  kubelet          Node old-k8s-version-087235 status is now: NodeReady
	  Normal  Starting                 56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x9 over 56s)    kubelet          Node old-k8s-version-087235 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)    kubelet          Node old-k8s-version-087235 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x7 over 56s)    kubelet          Node old-k8s-version-087235 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           39s                  node-controller  Node old-k8s-version-087235 event: Registered Node old-k8s-version-087235 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 7f 53 f9 15 93 08 06
	[  +0.017821] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c6 66 25 e9 45 28 08 06
	[ +19.178294] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 a3 65 b9 28 15 08 06
	[  +0.347929] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 6d df 33 a5 ad 08 06
	[  +0.000319] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 7f 53 f9 15 93 08 06
	[Nov15 10:33] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 96 ed 10 e8 9a 80 08 06
	[  +0.000481] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 a3 65 b9 28 15 08 06
	[ +35.214217] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 c2 9a 56 52 0c 08 06
	[  +9.104720] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 c2 c4 d1 7c dd 08 06
	[Nov15 10:34] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 68 1e 80 53 05 08 06
	[  +0.000444] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 c2 c4 d1 7c dd 08 06
	[ +18.836046] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 98 db 78 4f 0b 08 06
	[  +0.000708] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 c2 9a 56 52 0c 08 06
	
	
	==> etcd [dabb8b48098068214bdf9584f09c135d2dcdd3d138801a98bbacd77829336d90] <==
	{"level":"info","ts":"2025-11-15T10:34:51.061292Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-15T10:34:51.062553Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-11-15T10:34:51.119129Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2025-11-15T10:34:54.548016Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.259207ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790032054591519 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:476 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:3741 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-15T10:34:54.548216Z","caller":"traceutil/trace.go:171","msg":"trace[1674528248] linearizableReadLoop","detail":"{readStateIndex:505; appliedIndex:504; }","duration":"122.157455ms","start":"2025-11-15T10:34:54.426042Z","end":"2025-11-15T10:34:54.548199Z","steps":["trace[1674528248] 'read index received'  (duration: 16.127879ms)","trace[1674528248] 'applied index is now lower than readState.Index'  (duration: 106.027911ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T10:34:54.548232Z","caller":"traceutil/trace.go:171","msg":"trace[1743705293] transaction","detail":"{read_only:false; response_revision:482; number_of_response:1; }","duration":"123.550657ms","start":"2025-11-15T10:34:54.424657Z","end":"2025-11-15T10:34:54.548207Z","steps":["trace[1743705293] 'process raft request'  (duration: 17.565683ms)","trace[1743705293] 'compare'  (duration: 105.151296ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:34:54.548306Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.267798ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/kube-public/system:controller:bootstrap-signer\" ","response":"range_response_count:1 size:709"}
	{"level":"info","ts":"2025-11-15T10:34:54.548339Z","caller":"traceutil/trace.go:171","msg":"trace[160029548] range","detail":"{range_begin:/registry/roles/kube-public/system:controller:bootstrap-signer; range_end:; response_count:1; response_revision:482; }","duration":"122.308441ms","start":"2025-11-15T10:34:54.426019Z","end":"2025-11-15T10:34:54.548328Z","steps":["trace[160029548] 'agreement among raft nodes before linearized reading'  (duration: 122.229592ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T10:34:55.236379Z","caller":"traceutil/trace.go:171","msg":"trace[498445923] linearizableReadLoop","detail":"{readStateIndex:511; appliedIndex:510; }","duration":"120.078661ms","start":"2025-11-15T10:34:55.116283Z","end":"2025-11-15T10:34:55.236361Z","steps":["trace[498445923] 'read index received'  (duration: 56.05862ms)","trace[498445923] 'applied index is now lower than readState.Index'  (duration: 64.019373ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:34:55.236533Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.249275ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/ttl-after-finished-controller\" ","response":"range_response_count:1 size:224"}
	{"level":"info","ts":"2025-11-15T10:34:55.236564Z","caller":"traceutil/trace.go:171","msg":"trace[304975410] transaction","detail":"{read_only:false; response_revision:487; number_of_response:1; }","duration":"122.10709ms","start":"2025-11-15T10:34:55.114439Z","end":"2025-11-15T10:34:55.236546Z","steps":["trace[304975410] 'process raft request'  (duration: 57.869653ms)","trace[304975410] 'compare'  (duration: 63.946128ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T10:34:55.236573Z","caller":"traceutil/trace.go:171","msg":"trace[646848973] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/ttl-after-finished-controller; range_end:; response_count:1; response_revision:487; }","duration":"120.311415ms","start":"2025-11-15T10:34:55.116252Z","end":"2025-11-15T10:34:55.236563Z","steps":["trace[646848973] 'agreement among raft nodes before linearized reading'  (duration: 120.177488ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T10:34:55.491553Z","caller":"traceutil/trace.go:171","msg":"trace[1354959381] linearizableReadLoop","detail":"{readStateIndex:513; appliedIndex:512; }","duration":"158.246756ms","start":"2025-11-15T10:34:55.333291Z","end":"2025-11-15T10:34:55.491538Z","steps":["trace[1354959381] 'read index received'  (duration: 158.143238ms)","trace[1354959381] 'applied index is now lower than readState.Index'  (duration: 102.92µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:34:55.491673Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.389679ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-11-15T10:34:55.491694Z","caller":"traceutil/trace.go:171","msg":"trace[433858641] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:489; }","duration":"158.427797ms","start":"2025-11-15T10:34:55.333261Z","end":"2025-11-15T10:34:55.491689Z","steps":["trace[433858641] 'agreement among raft nodes before linearized reading'  (duration: 158.337252ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T10:34:55.491687Z","caller":"traceutil/trace.go:171","msg":"trace[1782841908] transaction","detail":"{read_only:false; response_revision:489; number_of_response:1; }","duration":"167.584811ms","start":"2025-11-15T10:34:55.324083Z","end":"2025-11-15T10:34:55.491668Z","steps":["trace[1782841908] 'process raft request'  (duration: 167.346476ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T10:34:55.832277Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"238.66064ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kubernetes-dashboard/kubernetes-dashboard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-15T10:34:55.832353Z","caller":"traceutil/trace.go:171","msg":"trace[1927462408] range","detail":"{range_begin:/registry/rolebindings/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:0; response_revision:490; }","duration":"238.753535ms","start":"2025-11-15T10:34:55.59358Z","end":"2025-11-15T10:34:55.832334Z","steps":["trace[1927462408] 'range keys from in-memory index tree'  (duration: 238.577347ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T10:34:55.832286Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.35063ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/persistent-volume-binder\" ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2025-11-15T10:34:55.832449Z","caller":"traceutil/trace.go:171","msg":"trace[838577774] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/persistent-volume-binder; range_end:; response_count:1; response_revision:490; }","duration":"232.543354ms","start":"2025-11-15T10:34:55.599892Z","end":"2025-11-15T10:34:55.832435Z","steps":["trace[838577774] 'range keys from in-memory index tree'  (duration: 232.220453ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T10:34:56.067618Z","caller":"traceutil/trace.go:171","msg":"trace[1917144964] transaction","detail":"{read_only:false; response_revision:493; number_of_response:1; }","duration":"126.238188ms","start":"2025-11-15T10:34:55.941362Z","end":"2025-11-15T10:34:56.0676Z","steps":["trace[1917144964] 'process raft request'  (duration: 126.110419ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T10:34:56.318724Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.889915ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790032054591606 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/secrets/kubernetes-dashboard/kubernetes-dashboard-csrf\" mod_revision:0 > success:<request_put:<key:\"/registry/secrets/kubernetes-dashboard/kubernetes-dashboard-csrf\" value_size:956 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-15T10:34:56.318798Z","caller":"traceutil/trace.go:171","msg":"trace[1770777282] transaction","detail":"{read_only:false; response_revision:494; number_of_response:1; }","duration":"242.777355ms","start":"2025-11-15T10:34:56.076006Z","end":"2025-11-15T10:34:56.318783Z","steps":["trace[1770777282] 'process raft request'  (duration: 116.770885ms)","trace[1770777282] 'compare'  (duration: 125.768788ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T10:34:56.517761Z","caller":"traceutil/trace.go:171","msg":"trace[656185099] transaction","detail":"{read_only:false; response_revision:496; number_of_response:1; }","duration":"167.176403ms","start":"2025-11-15T10:34:56.350564Z","end":"2025-11-15T10:34:56.51774Z","steps":["trace[656185099] 'process raft request'  (duration: 122.311543ms)","trace[656185099] 'compare'  (duration: 44.654268ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T10:34:56.636944Z","caller":"traceutil/trace.go:171","msg":"trace[470485118] transaction","detail":"{read_only:false; response_revision:497; number_of_response:1; }","duration":"117.651719ms","start":"2025-11-15T10:34:56.519275Z","end":"2025-11-15T10:34:56.636926Z","steps":["trace[470485118] 'process raft request'  (duration: 111.818601ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:35:46 up  2:18,  0 user,  load average: 4.01, 4.40, 2.76
	Linux old-k8s-version-087235 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7594c7c2d610745a399557dd1247f6642b08937e57147358c301470340e5bbb3] <==
	I1115 10:34:53.735476       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:34:53.735923       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1115 10:34:53.736240       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:34:53.736272       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:34:53.736288       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:34:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:34:54.025754       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:34:54.026638       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:34:54.026747       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:34:54.026926       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 10:35:24.025947       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1115 10:35:24.026918       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 10:35:24.026993       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 10:35:24.027004       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1115 10:35:25.627591       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:35:25.627624       1 metrics.go:72] Registering metrics
	I1115 10:35:25.627704       1 controller.go:711] "Syncing nftables rules"
	I1115 10:35:34.026041       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1115 10:35:34.026107       1 main.go:301] handling current node
	I1115 10:35:44.031041       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1115 10:35:44.031090       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3fd62a9dd47699ac165f43ff643bf99a6efeeed696c5fdcd642be6b2a9374ff1] <==
	I1115 10:34:52.932502       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1115 10:34:52.947668       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1115 10:34:52.947704       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E1115 10:34:52.948162       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	I1115 10:34:52.954293       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:34:53.018994       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1115 10:34:53.019425       1 shared_informer.go:318] Caches are synced for configmaps
	I1115 10:34:53.020172       1 cache.go:39] Caches are synced for autoregister controller
	E1115 10:34:53.026021       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 10:34:53.823564       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:34:54.922256       1 controller.go:624] quota admission added evaluator for: namespaces
	I1115 10:34:55.247976       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1115 10:34:55.499823       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:34:55.837511       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:34:55.859680       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1115 10:34:56.637636       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.195.233"}
	I1115 10:34:56.665265       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.57.213"}
	E1115 10:35:02.948824       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	I1115 10:35:06.007617       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1115 10:35:06.147275       1 controller.go:624] quota admission added evaluator for: endpoints
	I1115 10:35:06.150530       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1115 10:35:12.949124       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E1115 10:35:22.950151       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high"] items=[{},{},{},{},{},{},{},{}]
	E1115 10:35:32.950643       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","node-high","system","workload-high","workload-low","catch-all","exempt","global-default"] items=[{},{},{},{},{},{},{},{}]
	E1115 10:35:42.951365       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-controller-manager [b8b1ccd6451f4579f89a5a5b4368b0f6ed96c45d344cd9110c94b49fdceb39ed] <==
	I1115 10:35:06.128932       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="106.038636ms"
	I1115 10:35:06.132255       1 shared_informer.go:318] Caches are synced for resource quota
	I1115 10:35:06.133422       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="113.04467ms"
	I1115 10:35:06.139829       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="10.819545ms"
	I1115 10:35:06.140127       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="109.31µs"
	I1115 10:35:06.141142       1 shared_informer.go:318] Caches are synced for crt configmap
	I1115 10:35:06.142131       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="8.616169ms"
	I1115 10:35:06.142325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="75.326µs"
	I1115 10:35:06.152808       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I1115 10:35:06.219580       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="80.4µs"
	I1115 10:35:06.219815       1 shared_informer.go:318] Caches are synced for stateful set
	I1115 10:35:06.219827       1 shared_informer.go:318] Caches are synced for resource quota
	I1115 10:35:06.219943       1 shared_informer.go:318] Caches are synced for daemon sets
	I1115 10:35:06.543790       1 shared_informer.go:318] Caches are synced for garbage collector
	I1115 10:35:06.612690       1 shared_informer.go:318] Caches are synced for garbage collector
	I1115 10:35:06.612732       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1115 10:35:11.292556       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="99.369µs"
	I1115 10:35:12.337691       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="96.79µs"
	I1115 10:35:13.339992       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.703µs"
	I1115 10:35:16.369001       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="9.025904ms"
	I1115 10:35:16.369362       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="132.688µs"
	I1115 10:35:29.877018       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.887332ms"
	I1115 10:35:29.877133       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.485µs"
	I1115 10:35:31.396374       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="121.27µs"
	I1115 10:35:36.443505       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="161.761µs"
	
	
	==> kube-proxy [ba78e319d11c588a26d306264073a90262f5ec5da127e677e9bdbe733738df60] <==
	I1115 10:34:53.644611       1 server_others.go:69] "Using iptables proxy"
	I1115 10:34:53.658002       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1115 10:34:53.733465       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:34:53.737099       1 server_others.go:152] "Using iptables Proxier"
	I1115 10:34:53.737136       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1115 10:34:53.737143       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1115 10:34:53.737183       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1115 10:34:53.737470       1 server.go:846] "Version info" version="v1.28.0"
	I1115 10:34:53.737494       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:34:53.738284       1 config.go:188] "Starting service config controller"
	I1115 10:34:53.738365       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1115 10:34:53.738415       1 config.go:315] "Starting node config controller"
	I1115 10:34:53.738443       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1115 10:34:53.738794       1 config.go:97] "Starting endpoint slice config controller"
	I1115 10:34:53.738840       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1115 10:34:53.839151       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1115 10:34:53.839219       1 shared_informer.go:318] Caches are synced for service config
	I1115 10:34:53.839736       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [8ce75f5e9ad57aaaace9af39da481c138fb57073d1fee7bc88e75f67b8b6e7f7] <==
	I1115 10:34:50.253637       1 serving.go:348] Generated self-signed cert in-memory
	W1115 10:34:52.832609       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1115 10:34:52.834009       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1115 10:34:52.834184       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1115 10:34:52.834259       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1115 10:34:53.023507       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1115 10:34:53.023610       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:34:53.027685       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:34:53.027778       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1115 10:34:53.029053       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1115 10:34:53.029481       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1115 10:34:53.128680       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 15 10:35:06 old-k8s-version-087235 kubelet[836]: I1115 10:35:06.224603     836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjv4z\" (UniqueName: \"kubernetes.io/projected/cdb69a62-a600-4d3b-aaec-535c3b64028f-kube-api-access-rjv4z\") pod \"kubernetes-dashboard-8694d4445c-sh86n\" (UID: \"cdb69a62-a600-4d3b-aaec-535c3b64028f\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-sh86n"
	Nov 15 10:35:06 old-k8s-version-087235 kubelet[836]: I1115 10:35:06.224699     836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjcrp\" (UniqueName: \"kubernetes.io/projected/36f671bc-2446-4742-af31-8d43717071b8-kube-api-access-jjcrp\") pod \"dashboard-metrics-scraper-5f989dc9cf-58wdf\" (UID: \"36f671bc-2446-4742-af31-8d43717071b8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-58wdf"
	Nov 15 10:35:06 old-k8s-version-087235 kubelet[836]: I1115 10:35:06.224766     836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/cdb69a62-a600-4d3b-aaec-535c3b64028f-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-sh86n\" (UID: \"cdb69a62-a600-4d3b-aaec-535c3b64028f\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-sh86n"
	Nov 15 10:35:06 old-k8s-version-087235 kubelet[836]: I1115 10:35:06.224803     836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/36f671bc-2446-4742-af31-8d43717071b8-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-58wdf\" (UID: \"36f671bc-2446-4742-af31-8d43717071b8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-58wdf"
	Nov 15 10:35:06 old-k8s-version-087235 kubelet[836]: W1115 10:35:06.450642     836 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/3d4715b4872d2f4603f13f0191a01c4322a26432e1316c2ae62b1c571bea1814/crio-e0fbb810feadeffbb61460bd785a3b20faec65f04dae7b98210e0b136e0e93d7 WatchSource:0}: Error finding container e0fbb810feadeffbb61460bd785a3b20faec65f04dae7b98210e0b136e0e93d7: Status 404 returned error can't find the container with id e0fbb810feadeffbb61460bd785a3b20faec65f04dae7b98210e0b136e0e93d7
	Nov 15 10:35:06 old-k8s-version-087235 kubelet[836]: W1115 10:35:06.451561     836 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/3d4715b4872d2f4603f13f0191a01c4322a26432e1316c2ae62b1c571bea1814/crio-89607ac1ff73bb74dac1ac6d3bc00c1684b24c2031a1c798fdf3a7489e6efe24 WatchSource:0}: Error finding container 89607ac1ff73bb74dac1ac6d3bc00c1684b24c2031a1c798fdf3a7489e6efe24: Status 404 returned error can't find the container with id 89607ac1ff73bb74dac1ac6d3bc00c1684b24c2031a1c798fdf3a7489e6efe24
	Nov 15 10:35:11 old-k8s-version-087235 kubelet[836]: I1115 10:35:11.281020     836 scope.go:117] "RemoveContainer" containerID="fd6737949022a48de4ef52058917effe45da619e197fc6ac8936ccc51cdc86d7"
	Nov 15 10:35:12 old-k8s-version-087235 kubelet[836]: I1115 10:35:12.323547     836 scope.go:117] "RemoveContainer" containerID="fd6737949022a48de4ef52058917effe45da619e197fc6ac8936ccc51cdc86d7"
	Nov 15 10:35:12 old-k8s-version-087235 kubelet[836]: I1115 10:35:12.323763     836 scope.go:117] "RemoveContainer" containerID="009b1abd319e558a009c9e01ea269a7e703df0fddbd0a74c73f9e2fcfd875758"
	Nov 15 10:35:12 old-k8s-version-087235 kubelet[836]: E1115 10:35:12.324188     836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-58wdf_kubernetes-dashboard(36f671bc-2446-4742-af31-8d43717071b8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-58wdf" podUID="36f671bc-2446-4742-af31-8d43717071b8"
	Nov 15 10:35:13 old-k8s-version-087235 kubelet[836]: I1115 10:35:13.328611     836 scope.go:117] "RemoveContainer" containerID="009b1abd319e558a009c9e01ea269a7e703df0fddbd0a74c73f9e2fcfd875758"
	Nov 15 10:35:13 old-k8s-version-087235 kubelet[836]: E1115 10:35:13.329095     836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-58wdf_kubernetes-dashboard(36f671bc-2446-4742-af31-8d43717071b8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-58wdf" podUID="36f671bc-2446-4742-af31-8d43717071b8"
	Nov 15 10:35:16 old-k8s-version-087235 kubelet[836]: I1115 10:35:16.427734     836 scope.go:117] "RemoveContainer" containerID="009b1abd319e558a009c9e01ea269a7e703df0fddbd0a74c73f9e2fcfd875758"
	Nov 15 10:35:16 old-k8s-version-087235 kubelet[836]: E1115 10:35:16.428147     836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-58wdf_kubernetes-dashboard(36f671bc-2446-4742-af31-8d43717071b8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-58wdf" podUID="36f671bc-2446-4742-af31-8d43717071b8"
	Nov 15 10:35:24 old-k8s-version-087235 kubelet[836]: I1115 10:35:24.364325     836 scope.go:117] "RemoveContainer" containerID="034573ebc531040d6466ecf78c8b86fefe56032a558c0c6e459de1608b9d81f5"
	Nov 15 10:35:24 old-k8s-version-087235 kubelet[836]: I1115 10:35:24.375421     836 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-sh86n" podStartSLOduration=8.704564179 podCreationTimestamp="2025-11-15 10:35:06 +0000 UTC" firstStartedPulling="2025-11-15 10:35:06.455660291 +0000 UTC m=+17.533431579" lastFinishedPulling="2025-11-15 10:35:16.126455817 +0000 UTC m=+27.204227116" observedRunningTime="2025-11-15 10:35:16.36075414 +0000 UTC m=+27.438525456" watchObservedRunningTime="2025-11-15 10:35:24.375359716 +0000 UTC m=+35.453131020"
	Nov 15 10:35:31 old-k8s-version-087235 kubelet[836]: I1115 10:35:31.047813     836 scope.go:117] "RemoveContainer" containerID="009b1abd319e558a009c9e01ea269a7e703df0fddbd0a74c73f9e2fcfd875758"
	Nov 15 10:35:31 old-k8s-version-087235 kubelet[836]: I1115 10:35:31.384529     836 scope.go:117] "RemoveContainer" containerID="009b1abd319e558a009c9e01ea269a7e703df0fddbd0a74c73f9e2fcfd875758"
	Nov 15 10:35:31 old-k8s-version-087235 kubelet[836]: I1115 10:35:31.384759     836 scope.go:117] "RemoveContainer" containerID="235ae938bb4114fcf19fc01771df50bcd55d9e0253dd1fc357c036d91b1dbd6d"
	Nov 15 10:35:31 old-k8s-version-087235 kubelet[836]: E1115 10:35:31.385171     836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-58wdf_kubernetes-dashboard(36f671bc-2446-4742-af31-8d43717071b8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-58wdf" podUID="36f671bc-2446-4742-af31-8d43717071b8"
	Nov 15 10:35:36 old-k8s-version-087235 kubelet[836]: I1115 10:35:36.427004     836 scope.go:117] "RemoveContainer" containerID="235ae938bb4114fcf19fc01771df50bcd55d9e0253dd1fc357c036d91b1dbd6d"
	Nov 15 10:35:36 old-k8s-version-087235 kubelet[836]: E1115 10:35:36.427286     836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-58wdf_kubernetes-dashboard(36f671bc-2446-4742-af31-8d43717071b8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-58wdf" podUID="36f671bc-2446-4742-af31-8d43717071b8"
	Nov 15 10:35:43 old-k8s-version-087235 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:35:43 old-k8s-version-087235 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:35:43 old-k8s-version-087235 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [9e7dab2808e72b5ecf4c23f3a0c6c73dc08206c22ebcf5da92da7fd1464ea642] <==
	2025/11/15 10:35:16 Starting overwatch
	2025/11/15 10:35:16 Using namespace: kubernetes-dashboard
	2025/11/15 10:35:16 Using in-cluster config to connect to apiserver
	2025/11/15 10:35:16 Using secret token for csrf signing
	2025/11/15 10:35:16 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 10:35:16 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 10:35:16 Successful initial request to the apiserver, version: v1.28.0
	2025/11/15 10:35:16 Generating JWE encryption key
	2025/11/15 10:35:16 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 10:35:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 10:35:16 Initializing JWE encryption key from synchronized object
	2025/11/15 10:35:16 Creating in-cluster Sidecar client
	2025/11/15 10:35:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:35:16 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [034573ebc531040d6466ecf78c8b86fefe56032a558c0c6e459de1608b9d81f5] <==
	I1115 10:34:53.549609       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 10:35:23.552985       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [141d480cf3b64b6bc24f8f5013f9a931686b80ed7bf8b12a85bcd2b351953257] <==
	I1115 10:35:24.413328       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 10:35:24.421282       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:35:24.421333       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1115 10:35:41.817573       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:35:41.817707       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c51d799d-ecee-4db4-97cb-68755d563c6e", APIVersion:"v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-087235_b593027d-8c92-4382-92cf-700cbbe389b8 became leader
	I1115 10:35:41.817748       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-087235_b593027d-8c92-4382-92cf-700cbbe389b8!
	I1115 10:35:41.918000       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-087235_b593027d-8c92-4382-92cf-700cbbe389b8!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-087235 -n old-k8s-version-087235
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-087235 -n old-k8s-version-087235: exit status 2 (327.237776ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-087235 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-087235
helpers_test.go:243: (dbg) docker inspect old-k8s-version-087235:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3d4715b4872d2f4603f13f0191a01c4322a26432e1316c2ae62b1c571bea1814",
	        "Created": "2025-11-15T10:33:24.829295884Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 361917,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:34:42.560305954Z",
	            "FinishedAt": "2025-11-15T10:34:41.544966298Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/3d4715b4872d2f4603f13f0191a01c4322a26432e1316c2ae62b1c571bea1814/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3d4715b4872d2f4603f13f0191a01c4322a26432e1316c2ae62b1c571bea1814/hostname",
	        "HostsPath": "/var/lib/docker/containers/3d4715b4872d2f4603f13f0191a01c4322a26432e1316c2ae62b1c571bea1814/hosts",
	        "LogPath": "/var/lib/docker/containers/3d4715b4872d2f4603f13f0191a01c4322a26432e1316c2ae62b1c571bea1814/3d4715b4872d2f4603f13f0191a01c4322a26432e1316c2ae62b1c571bea1814-json.log",
	        "Name": "/old-k8s-version-087235",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-087235:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-087235",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3d4715b4872d2f4603f13f0191a01c4322a26432e1316c2ae62b1c571bea1814",
	                "LowerDir": "/var/lib/docker/overlay2/2a7fcfc651315e1f255f5887dcd49b9489c6d9bc0d5bb2729e48f789bfad9233-init/diff:/var/lib/docker/overlay2/507c85b6fb43d9c98216fac79d7a8d08dd20b2a63d0fbfc758336e5d38c04044/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2a7fcfc651315e1f255f5887dcd49b9489c6d9bc0d5bb2729e48f789bfad9233/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2a7fcfc651315e1f255f5887dcd49b9489c6d9bc0d5bb2729e48f789bfad9233/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2a7fcfc651315e1f255f5887dcd49b9489c6d9bc0d5bb2729e48f789bfad9233/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-087235",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-087235/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-087235",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-087235",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-087235",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9e46ad3d7d257a4acedaacae202f5c7e5ff342db3043ae0b762b3eb0dc67b0c9",
	            "SandboxKey": "/var/run/docker/netns/9e46ad3d7d25",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-087235": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "11bae6d0a5454f5603cad7765ca7366f9be46b927618f2c698dc454d778aa49c",
	                    "EndpointID": "74e26a0798dfa3498a4af2e39ad2b821ec1833feae7cd7a3eda4e27e4faa8c71",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "86:2c:ba:e2:e0:26",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-087235",
	                        "3d4715b4872d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-087235 -n old-k8s-version-087235
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-087235 -n old-k8s-version-087235: exit status 2 (324.585296ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-087235 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-087235 logs -n 25: (1.146089636s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-931243 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ ssh     │ -p bridge-931243 sudo docker system info                                                                                                                                 │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ ssh     │ -p bridge-931243 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ ssh     │ -p bridge-931243 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ ssh     │ -p bridge-931243 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo cri-dockerd --version                                                                                                                              │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ ssh     │ -p bridge-931243 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo containerd config dump                                                                                                                             │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo crio config                                                                                                                                        │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ delete  │ -p bridge-931243                                                                                                                                                         │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ delete  │ -p disable-driver-mounts-435527                                                                                                                                          │ disable-driver-mounts-435527 │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ start   │ -p default-k8s-diff-port-026691 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-283677 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ start   │ -p no-preload-283677 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-719574 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ stop    │ -p embed-certs-719574 --alsologtostderr -v=3                                                                                                                             │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ image   │ old-k8s-version-087235 image list --format=json                                                                                                                          │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ pause   │ -p old-k8s-version-087235 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:34:57
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:34:57.108674  368849 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:34:57.109040  368849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:34:57.109051  368849 out.go:374] Setting ErrFile to fd 2...
	I1115 10:34:57.109058  368849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:34:57.111080  368849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 10:34:57.111766  368849 out.go:368] Setting JSON to false
	I1115 10:34:57.113998  368849 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8234,"bootTime":1763194663,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:34:57.114136  368849 start.go:143] virtualization: kvm guest
	I1115 10:34:57.115948  368849 out.go:179] * [no-preload-283677] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:34:57.117523  368849 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 10:34:57.117555  368849 notify.go:221] Checking for updates...
	I1115 10:34:57.119869  368849 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:34:57.121118  368849 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:34:57.122183  368849 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube
	I1115 10:34:57.123828  368849 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:34:57.125045  368849 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:34:57.127033  368849 config.go:182] Loaded profile config "no-preload-283677": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:34:57.127935  368849 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:34:57.156939  368849 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:34:57.157094  368849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:34:57.240931  368849 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:79 SystemTime:2025-11-15 10:34:57.228600984 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:34:57.241107  368849 docker.go:319] overlay module found
	I1115 10:34:57.243006  368849 out.go:179] * Using the docker driver based on existing profile
	I1115 10:34:56.682396  361423 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1115 10:34:56.682754  361423 addons.go:515] duration metric: took 6.415772773s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1115 10:34:56.684325  361423 api_server.go:141] control plane version: v1.28.0
	I1115 10:34:56.684354  361423 api_server.go:131] duration metric: took 8.788317ms to wait for apiserver health ...
	I1115 10:34:56.684364  361423 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:34:56.690921  361423 system_pods.go:59] 8 kube-system pods found
	I1115 10:34:56.691034  361423 system_pods.go:61] "coredns-5dd5756b68-bdpfv" [f9b5c9c2-d642-4a22-890d-89a8f91f771b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:34:56.691127  361423 system_pods.go:61] "etcd-old-k8s-version-087235" [ad6ea576-0a1a-4a8a-96c2-3076229520e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:34:56.691149  361423 system_pods.go:61] "kindnet-7btvm" [40ac7700-b07d-4504-8532-414d2fab7395] Running
	I1115 10:34:56.691158  361423 system_pods.go:61] "kube-apiserver-old-k8s-version-087235" [d035a8eb-e4b4-4eae-8726-c29fa6059168] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:34:56.691166  361423 system_pods.go:61] "kube-controller-manager-old-k8s-version-087235" [9cc04644-cddd-4b9c-964c-826ee56dbc47] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:34:56.691172  361423 system_pods.go:61] "kube-proxy-gl22j" [a854c189-3bd6-4c7d-8160-ae11b35db003] Running
	I1115 10:34:56.691179  361423 system_pods.go:61] "kube-scheduler-old-k8s-version-087235" [1da33e9b-b3a7-4e2b-b951-81bfc76bb515] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:34:56.691184  361423 system_pods.go:61] "storage-provisioner" [f2e47bd9-5a00-47cd-9b2e-5b80244c04a1] Running
	I1115 10:34:56.691199  361423 system_pods.go:74] duration metric: took 6.828122ms to wait for pod list to return data ...
	I1115 10:34:56.691207  361423 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:34:56.695797  361423 default_sa.go:45] found service account: "default"
	I1115 10:34:56.695993  361423 default_sa.go:55] duration metric: took 4.775405ms for default service account to be created ...
	I1115 10:34:56.696009  361423 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:34:56.706900  361423 system_pods.go:86] 8 kube-system pods found
	I1115 10:34:56.706946  361423 system_pods.go:89] "coredns-5dd5756b68-bdpfv" [f9b5c9c2-d642-4a22-890d-89a8f91f771b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:34:56.707061  361423 system_pods.go:89] "etcd-old-k8s-version-087235" [ad6ea576-0a1a-4a8a-96c2-3076229520e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:34:56.707075  361423 system_pods.go:89] "kindnet-7btvm" [40ac7700-b07d-4504-8532-414d2fab7395] Running
	I1115 10:34:56.707086  361423 system_pods.go:89] "kube-apiserver-old-k8s-version-087235" [d035a8eb-e4b4-4eae-8726-c29fa6059168] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:34:56.707148  361423 system_pods.go:89] "kube-controller-manager-old-k8s-version-087235" [9cc04644-cddd-4b9c-964c-826ee56dbc47] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:34:56.707168  361423 system_pods.go:89] "kube-proxy-gl22j" [a854c189-3bd6-4c7d-8160-ae11b35db003] Running
	I1115 10:34:56.707188  361423 system_pods.go:89] "kube-scheduler-old-k8s-version-087235" [1da33e9b-b3a7-4e2b-b951-81bfc76bb515] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:34:56.707217  361423 system_pods.go:89] "storage-provisioner" [f2e47bd9-5a00-47cd-9b2e-5b80244c04a1] Running
	I1115 10:34:56.707230  361423 system_pods.go:126] duration metric: took 11.211997ms to wait for k8s-apps to be running ...
	I1115 10:34:56.707238  361423 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:34:56.707321  361423 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:34:56.739287  361423 system_svc.go:56] duration metric: took 32.035692ms WaitForService to wait for kubelet
	I1115 10:34:56.739406  361423 kubeadm.go:587] duration metric: took 6.472459641s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:34:56.739438  361423 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:34:56.744554  361423 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:34:56.744591  361423 node_conditions.go:123] node cpu capacity is 8
	I1115 10:34:56.744610  361423 node_conditions.go:105] duration metric: took 5.164463ms to run NodePressure ...
	I1115 10:34:56.744623  361423 start.go:242] waiting for startup goroutines ...
	I1115 10:34:56.744633  361423 start.go:247] waiting for cluster config update ...
	I1115 10:34:56.744648  361423 start.go:256] writing updated cluster config ...
	I1115 10:34:56.744949  361423 ssh_runner.go:195] Run: rm -f paused
	I1115 10:34:56.752666  361423 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:34:56.758416  361423 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-bdpfv" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:57.244155  368849 start.go:309] selected driver: docker
	I1115 10:34:57.244180  368849 start.go:930] validating driver "docker" against &{Name:no-preload-283677 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-283677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:34:57.244301  368849 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:34:57.245328  368849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:34:57.321410  368849 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:79 SystemTime:2025-11-15 10:34:57.3090885 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:34:57.321759  368849 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:34:57.321796  368849 cni.go:84] Creating CNI manager for ""
	I1115 10:34:57.321849  368849 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:34:57.321897  368849 start.go:353] cluster config:
	{Name:no-preload-283677 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-283677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:34:57.324353  368849 out.go:179] * Starting "no-preload-283677" primary control-plane node in "no-preload-283677" cluster
	I1115 10:34:57.325413  368849 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:34:57.326593  368849 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:34:57.327877  368849 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:34:57.327926  368849 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:34:57.328103  368849 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/config.json ...
	I1115 10:34:57.328512  368849 cache.go:107] acquiring lock: {Name:mk04e19ef4726336e87a2efa989ec89b11194587 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:34:57.328600  368849 cache.go:115] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1115 10:34:57.328611  368849 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 116.56µs
	I1115 10:34:57.328622  368849 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1115 10:34:57.328638  368849 cache.go:107] acquiring lock: {Name:mk160c40720b01bd77226b9ee86c8a56493b3987 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:34:57.328681  368849 cache.go:115] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1115 10:34:57.328688  368849 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 53.964µs
	I1115 10:34:57.328696  368849 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1115 10:34:57.328709  368849 cache.go:107] acquiring lock: {Name:mk568a3320f172c7702e0c64f82e9ab66f08dc56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:34:57.328745  368849 cache.go:115] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1115 10:34:57.328753  368849 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 45.66µs
	I1115 10:34:57.328760  368849 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1115 10:34:57.328772  368849 cache.go:107] acquiring lock: {Name:mk4538f0a5ff75ff8439835bfd59d64a365cd71b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:34:57.328806  368849 cache.go:115] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1115 10:34:57.328812  368849 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 42.3µs
	I1115 10:34:57.328820  368849 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1115 10:34:57.328842  368849 cache.go:107] acquiring lock: {Name:mkebd0527ca8cd5425c0189738c4c613b1d0ad77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:34:57.328878  368849 cache.go:115] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1115 10:34:57.328884  368849 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 55.883µs
	I1115 10:34:57.328893  368849 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1115 10:34:57.329374  368849 cache.go:107] acquiring lock: {Name:mk5c9d9d1f91519c0468e055d96da9be78d8987d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:34:57.329494  368849 cache.go:115] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1115 10:34:57.329505  368849 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 157µs
	I1115 10:34:57.329514  368849 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1115 10:34:57.329533  368849 cache.go:107] acquiring lock: {Name:mk6d25d7926738a8037e85ed094d1b802d5c1f77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:34:57.329577  368849 cache.go:115] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1115 10:34:57.329583  368849 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 53.182µs
	I1115 10:34:57.329591  368849 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1115 10:34:57.329625  368849 cache.go:107] acquiring lock: {Name:mkc6ed1fa15fd637355ac953d6d06e91f3f34a59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:34:57.329680  368849 cache.go:115] /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1115 10:34:57.329687  368849 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 65.791µs
	I1115 10:34:57.329700  368849 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1115 10:34:57.329724  368849 cache.go:87] Successfully saved all images to host disk.
	I1115 10:34:57.355013  368849 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:34:57.355036  368849 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:34:57.355056  368849 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:34:57.355084  368849 start.go:360] acquireMachinesLock for no-preload-283677: {Name:mk8d9dc816de84055c03b404ddcac096c332be5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:34:57.355145  368849 start.go:364] duration metric: took 42.843µs to acquireMachinesLock for "no-preload-283677"
	I1115 10:34:57.355165  368849 start.go:96] Skipping create...Using existing machine configuration
	I1115 10:34:57.355174  368849 fix.go:54] fixHost starting: 
	I1115 10:34:57.355455  368849 cli_runner.go:164] Run: docker container inspect no-preload-283677 --format={{.State.Status}}
	I1115 10:34:57.375065  368849 fix.go:112] recreateIfNeeded on no-preload-283677: state=Stopped err=<nil>
	W1115 10:34:57.375094  368849 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 10:34:52.640072  367608 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:34:52.641977  367608 start.go:159] libmachine.API.Create for "default-k8s-diff-port-026691" (driver="docker")
	I1115 10:34:52.642026  367608 client.go:173] LocalClient.Create starting
	I1115 10:34:52.642126  367608 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem
	I1115 10:34:52.642171  367608 main.go:143] libmachine: Decoding PEM data...
	I1115 10:34:52.642193  367608 main.go:143] libmachine: Parsing certificate...
	I1115 10:34:52.642275  367608 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem
	I1115 10:34:52.642302  367608 main.go:143] libmachine: Decoding PEM data...
	I1115 10:34:52.642316  367608 main.go:143] libmachine: Parsing certificate...
	I1115 10:34:52.642807  367608 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-026691 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:34:52.663735  367608 cli_runner.go:211] docker network inspect default-k8s-diff-port-026691 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:34:52.663801  367608 network_create.go:284] running [docker network inspect default-k8s-diff-port-026691] to gather additional debugging logs...
	I1115 10:34:52.663820  367608 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-026691
	W1115 10:34:52.681651  367608 cli_runner.go:211] docker network inspect default-k8s-diff-port-026691 returned with exit code 1
	I1115 10:34:52.681682  367608 network_create.go:287] error running [docker network inspect default-k8s-diff-port-026691]: docker network inspect default-k8s-diff-port-026691: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-026691 not found
	I1115 10:34:52.681694  367608 network_create.go:289] output of [docker network inspect default-k8s-diff-port-026691]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-026691 not found
	
	** /stderr **
	I1115 10:34:52.681815  367608 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:34:52.703576  367608 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-77644897380e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:f9:e2:83:c3:91} reservation:<nil>}
	I1115 10:34:52.704399  367608 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e808d03b2052 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:03:24:ef:92:a1} reservation:<nil>}
	I1115 10:34:52.705358  367608 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3fb45eeaa66e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1a:b3:b7:0f:2e:99} reservation:<nil>}
	I1115 10:34:52.706067  367608 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-31f43b806931 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0a:cc:8c:d8:0d:c5} reservation:<nil>}
	I1115 10:34:52.707182  367608 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ec6c60}
	I1115 10:34:52.707213  367608 network_create.go:124] attempt to create docker network default-k8s-diff-port-026691 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1115 10:34:52.707274  367608 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-026691 default-k8s-diff-port-026691
	I1115 10:34:52.763872  367608 network_create.go:108] docker network default-k8s-diff-port-026691 192.168.85.0/24 created
	I1115 10:34:52.763908  367608 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-026691" container
	I1115 10:34:52.764001  367608 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:34:52.794341  367608 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-026691 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-026691 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:34:52.814745  367608 oci.go:103] Successfully created a docker volume default-k8s-diff-port-026691
	I1115 10:34:52.814828  367608 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-026691-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-026691 --entrypoint /usr/bin/test -v default-k8s-diff-port-026691:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:34:53.252498  367608 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-026691
	I1115 10:34:53.252579  367608 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:34:53.252594  367608 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:34:53.252663  367608 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-026691:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1115 10:34:56.654774  367608 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-026691:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.402061031s)
	I1115 10:34:56.654813  367608 kic.go:203] duration metric: took 3.402214691s to extract preloaded images to volume ...
	W1115 10:34:56.654990  367608 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 10:34:56.655155  367608 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 10:34:56.764857  367608 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-026691 --name default-k8s-diff-port-026691 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-026691 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-026691 --network default-k8s-diff-port-026691 --ip 192.168.85.2 --volume default-k8s-diff-port-026691:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 10:34:57.094021  367608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Running}}
	I1115 10:34:57.121300  367608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:34:57.147203  367608 cli_runner.go:164] Run: docker exec default-k8s-diff-port-026691 stat /var/lib/dpkg/alternatives/iptables
	I1115 10:34:57.208529  367608 oci.go:144] the created container "default-k8s-diff-port-026691" has a running status.
	I1115 10:34:57.208578  367608 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa...
	I1115 10:34:54.186226  358343 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.814123ms
	I1115 10:34:54.189071  358343 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 10:34:54.189208  358343 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1115 10:34:54.189338  358343 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 10:34:54.189440  358343 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 10:34:57.855035  367608 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 10:34:57.883874  367608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:34:57.907435  367608 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 10:34:57.907455  367608 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-026691 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 10:34:57.965903  367608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:34:57.988026  367608 machine.go:94] provisionDockerMachine start ...
	I1115 10:34:57.988137  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:34:58.012542  367608 main.go:143] libmachine: Using SSH client type: native
	I1115 10:34:58.012924  367608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1115 10:34:58.012944  367608 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:34:58.159148  367608 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-026691
	
	I1115 10:34:58.159194  367608 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-026691"
	I1115 10:34:58.159277  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:34:58.189206  367608 main.go:143] libmachine: Using SSH client type: native
	I1115 10:34:58.189501  367608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1115 10:34:58.189523  367608 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-026691 && echo "default-k8s-diff-port-026691" | sudo tee /etc/hostname
	I1115 10:34:58.348350  367608 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-026691
	
	I1115 10:34:58.348454  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:34:58.368199  367608 main.go:143] libmachine: Using SSH client type: native
	I1115 10:34:58.368410  367608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1115 10:34:58.368430  367608 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-026691' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-026691/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-026691' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:34:58.503716  367608 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:34:58.503754  367608 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-55448/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-55448/.minikube}
	I1115 10:34:58.503778  367608 ubuntu.go:190] setting up certificates
	I1115 10:34:58.503791  367608 provision.go:84] configureAuth start
	I1115 10:34:58.503853  367608 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-026691
	I1115 10:34:58.522763  367608 provision.go:143] copyHostCerts
	I1115 10:34:58.522820  367608 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem, removing ...
	I1115 10:34:58.522830  367608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem
	I1115 10:34:58.522904  367608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem (1082 bytes)
	I1115 10:34:58.523027  367608 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem, removing ...
	I1115 10:34:58.523038  367608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem
	I1115 10:34:58.523078  367608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem (1123 bytes)
	I1115 10:34:58.523158  367608 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem, removing ...
	I1115 10:34:58.523169  367608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem
	I1115 10:34:58.523203  367608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem (1679 bytes)
	I1115 10:34:58.523272  367608 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-026691 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-026691 localhost minikube]
	I1115 10:34:58.590090  367608 provision.go:177] copyRemoteCerts
	I1115 10:34:58.590145  367608 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:34:58.590187  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:34:58.608644  367608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:34:58.703764  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:34:58.724559  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1115 10:34:58.742665  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:34:58.759994  367608 provision.go:87] duration metric: took 256.187247ms to configureAuth
	I1115 10:34:58.760028  367608 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:34:58.760213  367608 config.go:182] Loaded profile config "default-k8s-diff-port-026691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:34:58.760342  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:34:58.778722  367608 main.go:143] libmachine: Using SSH client type: native
	I1115 10:34:58.779014  367608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1115 10:34:58.779041  367608 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:34:59.033178  367608 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:34:59.033211  367608 machine.go:97] duration metric: took 1.045153146s to provisionDockerMachine
	I1115 10:34:59.033226  367608 client.go:176] duration metric: took 6.391191213s to LocalClient.Create
	I1115 10:34:59.033253  367608 start.go:167] duration metric: took 6.391304318s to libmachine.API.Create "default-k8s-diff-port-026691"
	I1115 10:34:59.033266  367608 start.go:293] postStartSetup for "default-k8s-diff-port-026691" (driver="docker")
	I1115 10:34:59.033285  367608 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:34:59.033376  367608 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:34:59.033438  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:34:59.053944  367608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:34:59.157205  367608 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:34:59.161685  367608 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:34:59.161717  367608 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:34:59.161733  367608 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/addons for local assets ...
	I1115 10:34:59.161795  367608 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/files for local assets ...
	I1115 10:34:59.161913  367608 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem -> 589622.pem in /etc/ssl/certs
	I1115 10:34:59.162069  367608 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:34:59.171183  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:34:59.197319  367608 start.go:296] duration metric: took 164.030813ms for postStartSetup
	I1115 10:34:59.197664  367608 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-026691
	I1115 10:34:59.222158  367608 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/config.json ...
	I1115 10:34:59.222456  367608 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:34:59.222508  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:34:59.245172  367608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:34:59.338333  367608 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:34:59.342944  367608 start.go:128] duration metric: took 6.710898676s to createHost
	I1115 10:34:59.342984  367608 start.go:83] releasing machines lock for "default-k8s-diff-port-026691", held for 6.711262903s
	I1115 10:34:59.343053  367608 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-026691
	I1115 10:34:59.360891  367608 ssh_runner.go:195] Run: cat /version.json
	I1115 10:34:59.360960  367608 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:34:59.360981  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:34:59.361027  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:34:59.380703  367608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:34:59.381093  367608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:34:59.543341  367608 ssh_runner.go:195] Run: systemctl --version
	I1115 10:34:59.550150  367608 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:34:59.588663  367608 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:34:59.594351  367608 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:34:59.594425  367608 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:34:59.627965  367608 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1115 10:34:59.627992  367608 start.go:496] detecting cgroup driver to use...
	I1115 10:34:59.628030  367608 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:34:59.628089  367608 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:34:59.644582  367608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:34:59.656945  367608 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:34:59.657016  367608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:34:59.673964  367608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:34:59.698909  367608 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:34:59.793897  367608 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:34:59.897920  367608 docker.go:234] disabling docker service ...
	I1115 10:34:59.898017  367608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:34:59.921681  367608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:34:59.935475  367608 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:35:00.040217  367608 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:35:00.145087  367608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:35:00.157908  367608 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:35:00.172301  367608 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:35:00.172359  367608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:00.185532  367608 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:35:00.185603  367608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:00.195014  367608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:00.204978  367608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:00.216321  367608 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:35:00.224805  367608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:00.233598  367608 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:00.248215  367608 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:00.257523  367608 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:35:00.265789  367608 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:35:00.273509  367608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:00.370097  367608 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:35:00.480383  367608 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:35:00.480459  367608 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:35:00.484506  367608 start.go:564] Will wait 60s for crictl version
	I1115 10:35:00.484571  367608 ssh_runner.go:195] Run: which crictl
	I1115 10:35:00.488156  367608 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:35:00.512458  367608 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:35:00.512546  367608 ssh_runner.go:195] Run: crio --version
	I1115 10:35:00.540995  367608 ssh_runner.go:195] Run: crio --version
	I1115 10:35:00.580705  367608 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
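	The sed sequence at 10:34:59-10:35:00 above rewrites cri-o's drop-in configuration in place before the crio restart. A minimal sketch of checking the end state on the node, covering only the keys those sed commands touch (the grep pattern and expected values come from the log lines above, not from an actual crio.conf listing):
	
	    $ cat /etc/crictl.yaml
	    runtime-endpoint: unix:///var/run/crio/crio.sock
	    $ grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	      "net.ipv4.ip_unprivileged_port_start=0",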
	I1115 10:34:57.377717  368849 out.go:252] * Restarting existing docker container for "no-preload-283677" ...
	I1115 10:34:57.377792  368849 cli_runner.go:164] Run: docker start no-preload-283677
	I1115 10:34:57.726123  368849 cli_runner.go:164] Run: docker container inspect no-preload-283677 --format={{.State.Status}}
	I1115 10:34:57.753398  368849 kic.go:430] container "no-preload-283677" state is running.
	I1115 10:34:57.753840  368849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-283677
	I1115 10:34:57.778603  368849 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/config.json ...
	I1115 10:34:57.778940  368849 machine.go:94] provisionDockerMachine start ...
	I1115 10:34:57.779390  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:34:57.804369  368849 main.go:143] libmachine: Using SSH client type: native
	I1115 10:34:57.805107  368849 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1115 10:34:57.805139  368849 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:34:57.806009  368849 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40268->127.0.0.1:33114: read: connection reset by peer
	I1115 10:35:00.948741  368849 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-283677
	
	I1115 10:35:00.948777  368849 ubuntu.go:182] provisioning hostname "no-preload-283677"
	I1115 10:35:00.948835  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:00.969578  368849 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:00.969832  368849 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1115 10:35:00.969850  368849 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-283677 && echo "no-preload-283677" | sudo tee /etc/hostname
	I1115 10:35:01.127681  368849 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-283677
	
	I1115 10:35:01.127767  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:01.146233  368849 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:01.146580  368849 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1115 10:35:01.146607  368849 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-283677' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-283677/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-283677' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:35:01.284681  368849 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:35:01.284713  368849 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-55448/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-55448/.minikube}
	I1115 10:35:01.284748  368849 ubuntu.go:190] setting up certificates
	I1115 10:35:01.284762  368849 provision.go:84] configureAuth start
	I1115 10:35:01.284822  368849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-283677
	I1115 10:35:01.303443  368849 provision.go:143] copyHostCerts
	I1115 10:35:01.303518  368849 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem, removing ...
	I1115 10:35:01.303535  368849 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem
	I1115 10:35:01.303611  368849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem (1123 bytes)
	I1115 10:35:01.303735  368849 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem, removing ...
	I1115 10:35:01.303747  368849 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem
	I1115 10:35:01.303788  368849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem (1679 bytes)
	I1115 10:35:01.303897  368849 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem, removing ...
	I1115 10:35:01.303909  368849 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem
	I1115 10:35:01.303945  368849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem (1082 bytes)
	I1115 10:35:01.304057  368849 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem org=jenkins.no-preload-283677 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-283677]
	I1115 10:35:01.479935  368849 provision.go:177] copyRemoteCerts
	I1115 10:35:01.480049  368849 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:35:01.480102  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:01.499143  368849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
	I1115 10:35:01.593407  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:35:01.611444  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 10:35:01.629246  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 10:35:01.647087  368849 provision.go:87] duration metric: took 362.308284ms to configureAuth
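	The server certificate generated during configureAuth carries the SAN list from the provision.go line above (127.0.0.1, 192.168.76.2, localhost, minikube, no-preload-283677). A hedged way to confirm it after the copyRemoteCerts step, using the destination path from the scp lines (the -ext flag assumes OpenSSL 1.1.1 or newer on the node):
	
	    $ sudo openssl x509 -in /etc/docker/server.pem -noout -ext subjectAltName
	    # should list exactly the DNS names and IP addresses from the san=[...] log line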
	I1115 10:35:01.647136  368849 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:35:01.647339  368849 config.go:182] Loaded profile config "no-preload-283677": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:01.647467  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:01.667372  368849 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:01.667673  368849 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1115 10:35:01.667695  368849 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:35:01.979196  368849 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:35:01.979228  368849 machine.go:97] duration metric: took 4.200198854s to provisionDockerMachine
	I1115 10:35:01.979281  368849 start.go:293] postStartSetup for "no-preload-283677" (driver="docker")
	I1115 10:35:01.979310  368849 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:35:01.979376  368849 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:35:01.979445  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:02.006457  368849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
	W1115 10:34:58.763972  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	W1115 10:35:00.765899  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	I1115 10:35:00.581817  367608 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-026691 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:35:00.607057  367608 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1115 10:35:00.613228  367608 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:35:00.626466  367608 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-026691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:35:00.626625  367608 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:35:00.626700  367608 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:35:00.658108  367608 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:35:00.658131  367608 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:35:00.658175  367608 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:35:00.696481  367608 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:35:00.696507  367608 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:35:00.696517  367608 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1115 10:35:00.696629  367608 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-026691 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:35:00.696715  367608 ssh_runner.go:195] Run: crio config
	I1115 10:35:00.744746  367608 cni.go:84] Creating CNI manager for ""
	I1115 10:35:00.744772  367608 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:35:00.744791  367608 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:35:00.744814  367608 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-026691 NodeName:default-k8s-diff-port-026691 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:35:00.744945  367608 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-026691"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:35:00.745029  367608 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:35:00.753434  367608 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:35:00.753504  367608 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:35:00.762137  367608 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1115 10:35:00.775671  367608 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:35:00.797030  367608 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
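	The kubeadm config dumped above is what gets written here as /var/tmp/minikube/kubeadm.yaml.new and later copied to /var/tmp/minikube/kubeadm.yaml. A sketch of sanity-checking it before init, assuming the bundled v1.34.1 kubeadm (which ships the config validate subcommand) and the paths from this run:
	
	    $ sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	        --config /var/tmp/minikube/kubeadm.yaml.new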
	I1115 10:35:00.815366  367608 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:35:00.819023  367608 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:35:00.829919  367608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:00.924599  367608 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:35:00.946789  367608 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691 for IP: 192.168.85.2
	I1115 10:35:00.946817  367608 certs.go:195] generating shared ca certs ...
	I1115 10:35:00.946839  367608 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:00.947089  367608 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 10:35:00.947146  367608 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 10:35:00.947160  367608 certs.go:257] generating profile certs ...
	I1115 10:35:00.947236  367608 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/client.key
	I1115 10:35:00.947253  367608 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/client.crt with IP's: []
	I1115 10:35:01.041305  367608 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/client.crt ...
	I1115 10:35:01.041332  367608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/client.crt: {Name:mk850ac752ca8e1bd96e0112fe9cd33d06ae9831 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:01.041557  367608 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/client.key ...
	I1115 10:35:01.041576  367608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/client.key: {Name:mkc9f22f4d08691fb039bf58ca3696be01b8d2da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:01.041712  367608 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.key.f8824eec
	I1115 10:35:01.041737  367608 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.crt.f8824eec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1115 10:35:01.322559  367608 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.crt.f8824eec ...
	I1115 10:35:01.322598  367608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.crt.f8824eec: {Name:mk3e587e72b06a1c3e15f6608c5003fe07edb847 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:01.322844  367608 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.key.f8824eec ...
	I1115 10:35:01.322868  367608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.key.f8824eec: {Name:mka898e08cb25730cf00e76bc5148d21b3cfc491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:01.323013  367608 certs.go:382] copying /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.crt.f8824eec -> /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.crt
	I1115 10:35:01.323157  367608 certs.go:386] copying /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.key.f8824eec -> /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.key
	I1115 10:35:01.323229  367608 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.key
	I1115 10:35:01.323245  367608 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.crt with IP's: []
	I1115 10:35:01.668272  367608 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.crt ...
	I1115 10:35:01.668297  367608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.crt: {Name:mkd2364b507fdcd0e7075f46fb15018bc571dc50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:01.668447  367608 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.key ...
	I1115 10:35:01.668460  367608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.key: {Name:mk25118b0c3511bad3ea017a869823a0d0c461a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:01.668624  367608 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:35:01.668657  367608 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:35:01.668665  367608 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:35:01.668690  367608 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:35:01.668714  367608 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:35:01.668735  367608 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:35:01.668771  367608 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:35:01.669438  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:35:01.688356  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:35:01.706706  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:35:01.726247  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:35:01.748085  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1115 10:35:01.768285  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:35:01.788057  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:35:01.809920  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:35:01.831794  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:35:01.856775  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:35:01.878135  367608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:35:01.900771  367608 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:35:01.917435  367608 ssh_runner.go:195] Run: openssl version
	I1115 10:35:01.925573  367608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:35:01.937193  367608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:01.942570  367608 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:01.942644  367608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:01.994260  367608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:35:02.006261  367608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:35:02.017141  367608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:35:02.021709  367608 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:35:02.021780  367608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:35:02.067748  367608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:35:02.078280  367608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:35:02.088732  367608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:35:02.093398  367608 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:35:02.093499  367608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:35:02.141627  367608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
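	The *.0 link names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-name hashes, which is exactly what the preceding `openssl x509 -hash -noout` runs compute. Reproducing one by hand for the minikube CA, with paths taken from the ln -fs command in the log:
	
	    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    b5213941
	    $ ls -l /etc/ssl/certs/b5213941.0        # abridged output
	    ... b5213941.0 -> /etc/ssl/certs/minikubeCA.pem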
	I1115 10:35:02.152207  367608 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:35:02.157541  367608 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 10:35:02.157606  367608 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-026691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:35:02.157707  367608 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:35:02.157765  367608 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:35:02.193432  367608 cri.go:89] found id: ""
	I1115 10:35:02.193509  367608 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:35:02.203886  367608 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 10:35:02.213132  367608 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 10:35:02.213199  367608 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 10:35:02.223642  367608 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 10:35:02.223664  367608 kubeadm.go:158] found existing configuration files:
	
	I1115 10:35:02.223715  367608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1115 10:35:02.233048  367608 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 10:35:02.233117  367608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 10:35:02.242878  367608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1115 10:35:02.252925  367608 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 10:35:02.253017  367608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 10:35:02.262094  367608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1115 10:35:02.272394  367608 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 10:35:02.272467  367608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 10:35:02.282583  367608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1115 10:35:02.293280  367608 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 10:35:02.293346  367608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 10:35:02.303662  367608 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 10:35:02.354565  367608 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 10:35:02.354729  367608 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 10:35:02.385123  367608 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 10:35:02.385201  367608 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1115 10:35:02.385231  367608 kubeadm.go:319] OS: Linux
	I1115 10:35:02.385269  367608 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 10:35:02.385308  367608 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 10:35:02.385351  367608 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 10:35:02.385393  367608 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 10:35:02.385433  367608 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 10:35:02.385481  367608 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 10:35:02.385522  367608 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 10:35:02.385561  367608 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 10:35:02.385602  367608 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 10:35:02.460034  367608 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 10:35:02.460205  367608 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 10:35:02.460365  367608 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 10:35:02.468539  367608 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
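	The preflight note above points at `kubeadm config images pull`; run against the same generated config, it pre-fetches the control-plane images through cri-o before init. A sketch using the binary and config paths from this run:
	
	    $ sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config images pull \
	        --config /var/tmp/minikube/kubeadm.yaml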
	I1115 10:35:00.333123  358343 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.143915574s
	I1115 10:35:00.909515  358343 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.720435889s
	I1115 10:35:02.691418  358343 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.502281905s
	I1115 10:35:02.704108  358343 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 10:35:02.720604  358343 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 10:35:02.737329  358343 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 10:35:02.737599  358343 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-719574 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 10:35:02.747326  358343 kubeadm.go:319] [bootstrap-token] Using token: ob95li.bwu5dbqfa14hsvt0
	I1115 10:35:02.110046  368849 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:35:02.114790  368849 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:35:02.114831  368849 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:35:02.114844  368849 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/addons for local assets ...
	I1115 10:35:02.114898  368849 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/files for local assets ...
	I1115 10:35:02.115028  368849 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem -> 589622.pem in /etc/ssl/certs
	I1115 10:35:02.115160  368849 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:35:02.124610  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:35:02.146846  368849 start.go:296] duration metric: took 167.527166ms for postStartSetup
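	The filesync scan above reflects minikube's file-sync convention: anything under the profile's .minikube/files tree is copied to the node at the mirrored absolute path, which is how 589622.pem ends up in /etc/ssl/certs. A sketch of adding another file through the same mechanism (my-extra-ca.pem is a hypothetical name):
	
	    $ mkdir -p /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs
	    $ cp my-extra-ca.pem /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/
	    # synced to /etc/ssl/certs/my-extra-ca.pem inside the node on the next start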
	I1115 10:35:02.146933  368849 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:35:02.147016  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:02.169248  368849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
	I1115 10:35:02.269154  368849 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:35:02.275482  368849 fix.go:56] duration metric: took 4.92029981s for fixHost
	I1115 10:35:02.275512  368849 start.go:83] releasing machines lock for "no-preload-283677", held for 4.920355261s
	I1115 10:35:02.275586  368849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-283677
	I1115 10:35:02.298638  368849 ssh_runner.go:195] Run: cat /version.json
	I1115 10:35:02.298698  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:02.298727  368849 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:35:02.298824  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:02.322717  368849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
	I1115 10:35:02.323463  368849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
	I1115 10:35:02.488756  368849 ssh_runner.go:195] Run: systemctl --version
	I1115 10:35:02.497019  368849 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:35:02.536446  368849 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:35:02.541399  368849 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:35:02.541491  368849 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:35:02.549838  368849 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:35:02.549867  368849 start.go:496] detecting cgroup driver to use...
	I1115 10:35:02.549905  368849 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:35:02.549977  368849 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:35:02.565514  368849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:35:02.577769  368849 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:35:02.577831  368849 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:35:02.592941  368849 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:35:02.605708  368849 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:35:02.688663  368849 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:35:02.788806  368849 docker.go:234] disabling docker service ...
	I1115 10:35:02.788873  368849 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:35:02.807424  368849 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:35:02.823661  368849 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:35:02.915268  368849 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:35:03.000433  368849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:35:03.014052  368849 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:35:03.029226  368849 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:35:03.029290  368849 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:03.038642  368849 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:35:03.038706  368849 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:03.049065  368849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:03.058622  368849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:03.068077  368849 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:35:03.076469  368849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:03.085644  368849 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:03.094454  368849 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:03.104534  368849 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:35:03.112679  368849 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:35:03.121020  368849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:03.222503  368849 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:35:03.357676  368849 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:35:03.357737  368849 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:35:03.361906  368849 start.go:564] Will wait 60s for crictl version
	I1115 10:35:03.361977  368849 ssh_runner.go:195] Run: which crictl
	I1115 10:35:03.365723  368849 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:35:03.404943  368849 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:35:03.405117  368849 ssh_runner.go:195] Run: crio --version
	I1115 10:35:03.438126  368849 ssh_runner.go:195] Run: crio --version
	I1115 10:35:03.469166  368849 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:35:02.748656  358343 out.go:252]   - Configuring RBAC rules ...
	I1115 10:35:02.748791  358343 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 10:35:02.753461  358343 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 10:35:02.760294  358343 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 10:35:02.763295  358343 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 10:35:02.766433  358343 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 10:35:02.769201  358343 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 10:35:03.098048  358343 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 10:35:03.518284  358343 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 10:35:04.097647  358343 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 10:35:04.098832  358343 kubeadm.go:319] 
	I1115 10:35:04.098915  358343 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 10:35:04.098925  358343 kubeadm.go:319] 
	I1115 10:35:04.099031  358343 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 10:35:04.099041  358343 kubeadm.go:319] 
	I1115 10:35:04.099069  358343 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 10:35:04.099152  358343 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 10:35:04.099270  358343 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 10:35:04.099293  358343 kubeadm.go:319] 
	I1115 10:35:04.099366  358343 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 10:35:04.099378  358343 kubeadm.go:319] 
	I1115 10:35:04.099446  358343 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 10:35:04.099456  358343 kubeadm.go:319] 
	I1115 10:35:04.099530  358343 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 10:35:04.099646  358343 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 10:35:04.099741  358343 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 10:35:04.099750  358343 kubeadm.go:319] 
	I1115 10:35:04.099881  358343 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 10:35:04.100020  358343 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 10:35:04.100033  358343 kubeadm.go:319] 
	I1115 10:35:04.100148  358343 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ob95li.bwu5dbqfa14hsvt0 \
	I1115 10:35:04.100288  358343 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c958101d3a67868123c5e439a6c1ea6e30a99a6538b76cbe461e2c919cf1aec \
	I1115 10:35:04.100317  358343 kubeadm.go:319] 	--control-plane 
	I1115 10:35:04.100323  358343 kubeadm.go:319] 
	I1115 10:35:04.100427  358343 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 10:35:04.100436  358343 kubeadm.go:319] 
	I1115 10:35:04.100540  358343 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ob95li.bwu5dbqfa14hsvt0 \
	I1115 10:35:04.100692  358343 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c958101d3a67868123c5e439a6c1ea6e30a99a6538b76cbe461e2c919cf1aec 
	I1115 10:35:04.103489  358343 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1115 10:35:04.103671  358343 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1115 10:35:04.103762  358343 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
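	The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. The standard kubeadm recipe reproduces it on the control-plane node; the certificate path here is an assumption based on the certificatesDir (/var/lib/minikube/certs) used in the kubeadm config dumped earlier:
	
	    $ openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	        | openssl rsa -pubin -outform der 2>/dev/null \
	        | openssl dgst -sha256 -hex
	    # the hex digest should match 2c958101d3a67868... from the join command above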
	I1115 10:35:04.103783  358343 cni.go:84] Creating CNI manager for ""
	I1115 10:35:04.103792  358343 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:35:04.105369  358343 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1115 10:35:03.470218  368849 cli_runner.go:164] Run: docker network inspect no-preload-283677 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:35:03.490738  368849 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1115 10:35:03.496698  368849 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:35:03.510823  368849 kubeadm.go:884] updating cluster {Name:no-preload-283677 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-283677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:35:03.511006  368849 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:35:03.511057  368849 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:35:03.547890  368849 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:35:03.547916  368849 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:35:03.547926  368849 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1115 10:35:03.548063  368849 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-283677 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-283677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
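Note (not part of the captured output): the [Unit]/[Service]/[Install] block above is the kubelet systemd drop-in minikube generates for this node. A hypothetical sketch of rendering such a drop-in with text/template; the struct fields here are illustrative, not minikube's actual types:

package main

import (
	"os"
	"text/template"
)

// kubeletDropIn is modeled on the unit shown in the log; the node-specific
// values are pulled out as template fields.
const kubeletDropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	_ = t.Execute(os.Stdout, struct {
		KubeletPath, NodeName, NodeIP string
	}{
		KubeletPath: "/var/lib/minikube/binaries/v1.34.1/kubelet",
		NodeName:    "no-preload-283677",
		NodeIP:      "192.168.76.2",
	})
}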
	I1115 10:35:03.548166  368849 ssh_runner.go:195] Run: crio config
	I1115 10:35:03.599181  368849 cni.go:84] Creating CNI manager for ""
	I1115 10:35:03.599206  368849 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:35:03.599223  368849 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:35:03.599244  368849 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-283677 NodeName:no-preload-283677 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:35:03.599372  368849 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-283677"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
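Note (not part of the captured output): the YAML above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is what gets written to /var/tmp/minikube/kubeadm.yaml.new below. As a hedged sketch, it could be schema-checked on the node with the bundled kubeadm binary; this is not something the test itself runs:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// "kubeadm config validate" (present in recent kubeadm releases) checks the file
	// against the kubeadm.k8s.io/v1beta4 schema without touching the cluster.
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.1/kubeadm",
		"config", "validate", "--config", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Fprintln(os.Stderr, "validation failed:", err)
	}
}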
	
	I1115 10:35:03.599441  368849 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:35:03.610310  368849 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:35:03.610397  368849 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:35:03.619706  368849 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1115 10:35:03.632722  368849 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:35:03.645918  368849 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1115 10:35:03.658741  368849 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:35:03.662232  368849 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:35:03.671761  368849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:03.756659  368849 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:35:03.786378  368849 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677 for IP: 192.168.76.2
	I1115 10:35:03.786402  368849 certs.go:195] generating shared ca certs ...
	I1115 10:35:03.786422  368849 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:03.786604  368849 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 10:35:03.786672  368849 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 10:35:03.786685  368849 certs.go:257] generating profile certs ...
	I1115 10:35:03.786797  368849 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/client.key
	I1115 10:35:03.786865  368849 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/apiserver.key.d18d8ebf
	I1115 10:35:03.786925  368849 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/proxy-client.key
	I1115 10:35:03.787095  368849 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:35:03.787136  368849 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:35:03.787149  368849 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:35:03.787190  368849 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:35:03.787228  368849 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:35:03.787263  368849 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:35:03.787329  368849 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:35:03.788176  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:35:03.809608  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:35:03.829918  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:35:03.850004  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:35:03.882797  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 10:35:03.974262  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:35:03.996550  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:35:04.017706  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/no-preload-283677/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 10:35:04.035832  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:35:04.053680  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:35:04.072674  368849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:35:04.091110  368849 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:35:04.106710  368849 ssh_runner.go:195] Run: openssl version
	I1115 10:35:04.113684  368849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:35:04.123025  368849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:04.127895  368849 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:04.127949  368849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:04.173742  368849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:35:04.183070  368849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:35:04.192820  368849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:35:04.197810  368849 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:35:04.197877  368849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:35:04.238270  368849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:35:04.249044  368849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:35:04.260640  368849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:35:04.265573  368849 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:35:04.265640  368849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:35:04.304857  368849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:35:04.316678  368849 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:35:04.321538  368849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:35:04.391497  368849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:35:04.568753  368849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:35:04.685855  368849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:35:04.802487  368849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:35:04.896806  368849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
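Note (not part of the captured output): each "openssl x509 -checkend 86400" call above exits non-zero if the certificate expires within the next 24 hours, which is how the existing certs are judged reusable here. An equivalent check in Go with crypto/x509 (path taken from the log; illustrative, not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// before now+d, i.e. the same question openssl's -checkend asks.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}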
	I1115 10:35:05.014514  368849 kubeadm.go:401] StartCluster: {Name:no-preload-283677 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-283677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:35:05.014628  368849 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:35:05.014704  368849 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:35:05.102808  368849 cri.go:89] found id: "324a3ff1cd89d80c17c217faf6db6f2b6c9f52f5abe13f2e83485e1e03b0c7aa"
	I1115 10:35:05.102868  368849 cri.go:89] found id: "8c532dc6e69808afe1a0ffb587f828c0b3f6fa37c71b2fbc5ce4abdafdedf008"
	I1115 10:35:05.102874  368849 cri.go:89] found id: "ac246fc71f81d0fb5e3cd430730c903f2ca388376feb3a8dc321eb565aa6c5ee"
	I1115 10:35:05.102879  368849 cri.go:89] found id: "c26ba954b1e2fc1ea4ef12fc0801c0a31171b23e67cf48fdeb9207cbdb3ba3b0"
	I1115 10:35:05.102883  368849 cri.go:89] found id: ""
	I1115 10:35:05.102973  368849 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:35:05.170451  368849 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:35:05Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:35:05.170545  368849 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:35:05.180340  368849 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:35:05.180361  368849 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:35:05.180411  368849 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:35:05.189950  368849 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:35:05.190767  368849 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-283677" does not appear in /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:35:05.192333  368849 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-55448/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-283677" cluster setting kubeconfig missing "no-preload-283677" context setting]
	I1115 10:35:05.193068  368849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:05.194778  368849 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:35:05.205108  368849 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1115 10:35:05.205141  368849 kubeadm.go:602] duration metric: took 24.774201ms to restartPrimaryControlPlane
	I1115 10:35:05.205152  368849 kubeadm.go:403] duration metric: took 190.652551ms to StartCluster
	I1115 10:35:05.205176  368849 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:05.205246  368849 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:35:05.206385  368849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:05.206642  368849 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:35:05.207102  368849 config.go:182] Loaded profile config "no-preload-283677": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:05.207057  368849 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:35:05.207165  368849 addons.go:70] Setting storage-provisioner=true in profile "no-preload-283677"
	I1115 10:35:05.207181  368849 addons.go:239] Setting addon storage-provisioner=true in "no-preload-283677"
	W1115 10:35:05.207190  368849 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:35:05.207190  368849 addons.go:70] Setting dashboard=true in profile "no-preload-283677"
	I1115 10:35:05.207217  368849 addons.go:239] Setting addon dashboard=true in "no-preload-283677"
	I1115 10:35:05.207212  368849 addons.go:70] Setting default-storageclass=true in profile "no-preload-283677"
	W1115 10:35:05.207233  368849 addons.go:248] addon dashboard should already be in state true
	I1115 10:35:05.207275  368849 host.go:66] Checking if "no-preload-283677" exists ...
	I1115 10:35:05.207221  368849 host.go:66] Checking if "no-preload-283677" exists ...
	I1115 10:35:05.207358  368849 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-283677"
	I1115 10:35:05.207703  368849 cli_runner.go:164] Run: docker container inspect no-preload-283677 --format={{.State.Status}}
	I1115 10:35:05.207808  368849 cli_runner.go:164] Run: docker container inspect no-preload-283677 --format={{.State.Status}}
	I1115 10:35:05.207815  368849 cli_runner.go:164] Run: docker container inspect no-preload-283677 --format={{.State.Status}}
	I1115 10:35:05.211477  368849 out.go:179] * Verifying Kubernetes components...
	I1115 10:35:05.213140  368849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:05.232351  368849 addons.go:239] Setting addon default-storageclass=true in "no-preload-283677"
	W1115 10:35:05.232370  368849 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:35:05.232392  368849 host.go:66] Checking if "no-preload-283677" exists ...
	I1115 10:35:05.232689  368849 cli_runner.go:164] Run: docker container inspect no-preload-283677 --format={{.State.Status}}
	I1115 10:35:05.232981  368849 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:35:05.232986  368849 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 10:35:05.234251  368849 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:05.234272  368849 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:35:05.234273  368849 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 10:35:05.234330  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:05.238080  368849 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 10:35:05.238101  368849 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 10:35:05.238157  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:05.254202  368849 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:05.254227  368849 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:35:05.254298  368849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:35:05.257406  368849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
	I1115 10:35:05.259999  368849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
	I1115 10:35:05.279539  368849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
	I1115 10:35:05.585009  368849 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 10:35:05.585042  368849 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 10:35:05.590684  368849 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:35:05.602650  368849 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 10:35:05.602676  368849 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 10:35:05.672946  368849 node_ready.go:35] waiting up to 6m0s for node "no-preload-283677" to be "Ready" ...
	I1115 10:35:05.684403  368849 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 10:35:05.684432  368849 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 10:35:05.690190  368849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:05.692466  368849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:05.769359  368849 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 10:35:05.769382  368849 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 10:35:05.787603  368849 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 10:35:05.787632  368849 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 10:35:05.883926  368849 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 10:35:05.883964  368849 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 10:35:05.974542  368849 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 10:35:05.974570  368849 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 10:35:05.992886  368849 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 10:35:05.992918  368849 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 10:35:06.012080  368849 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:35:06.012115  368849 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 10:35:06.084770  368849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1115 10:35:03.268007  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	W1115 10:35:05.764688  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	I1115 10:35:02.470272  367608 out.go:252]   - Generating certificates and keys ...
	I1115 10:35:02.470390  367608 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 10:35:02.470490  367608 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 10:35:02.779536  367608 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 10:35:02.945500  367608 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 10:35:03.605573  367608 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 10:35:03.703228  367608 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 10:35:04.283194  367608 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 10:35:04.283412  367608 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-026691 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1115 10:35:04.682718  367608 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 10:35:04.683098  367608 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-026691 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1115 10:35:05.030500  367608 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 10:35:05.382333  367608 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 10:35:06.139095  367608 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 10:35:06.139385  367608 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 10:35:06.418023  367608 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 10:35:06.723330  367608 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 10:35:07.482824  367608 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 10:35:08.034181  367608 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 10:35:08.156422  367608 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 10:35:08.157215  367608 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 10:35:08.161626  367608 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 10:35:04.106620  358343 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 10:35:04.111162  358343 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 10:35:04.111192  358343 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 10:35:04.124905  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 10:35:04.382718  358343 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:35:04.382786  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:04.382833  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-719574 minikube.k8s.io/updated_at=2025_11_15T10_35_04_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510 minikube.k8s.io/name=embed-certs-719574 minikube.k8s.io/primary=true
	I1115 10:35:04.628907  358343 ops.go:34] apiserver oom_adj: -16
	I1115 10:35:04.629011  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:05.129861  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:05.629943  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:06.129410  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:06.629879  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:07.129154  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:07.630326  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:08.129680  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:08.629294  358343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:08.746205  358343 kubeadm.go:1114] duration metric: took 4.363478497s to wait for elevateKubeSystemPrivileges
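Note (not part of the captured output): the repeated "kubectl get sa default" calls above are a simple poll, retried roughly every 500ms until the default ServiceAccount exists; that wait is the 4.36s elevateKubeSystemPrivileges metric. A sketch of such a wait loop, assuming kubectl is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls "kubectl get sa default" until it succeeds or the
// deadline passes, mirroring the retry loop visible in the log above.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}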
	I1115 10:35:08.746256  358343 kubeadm.go:403] duration metric: took 20.927857879s to StartCluster
	I1115 10:35:08.746281  358343 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:08.746351  358343 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:35:08.748593  358343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:08.748832  358343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 10:35:08.748841  358343 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:35:08.749290  358343 config.go:182] Loaded profile config "embed-certs-719574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:08.749362  358343 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:35:08.749448  358343 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-719574"
	I1115 10:35:08.749468  358343 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-719574"
	I1115 10:35:08.749501  358343 host.go:66] Checking if "embed-certs-719574" exists ...
	I1115 10:35:08.751286  358343 addons.go:70] Setting default-storageclass=true in profile "embed-certs-719574"
	I1115 10:35:08.751326  358343 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-719574"
	I1115 10:35:08.751768  358343 cli_runner.go:164] Run: docker container inspect embed-certs-719574 --format={{.State.Status}}
	I1115 10:35:08.752060  358343 cli_runner.go:164] Run: docker container inspect embed-certs-719574 --format={{.State.Status}}
	I1115 10:35:08.756196  358343 out.go:179] * Verifying Kubernetes components...
	I1115 10:35:08.757464  358343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:08.784018  358343 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:35:08.785232  358343 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:08.785253  358343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:35:08.785418  358343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:08.788366  358343 addons.go:239] Setting addon default-storageclass=true in "embed-certs-719574"
	I1115 10:35:08.788420  358343 host.go:66] Checking if "embed-certs-719574" exists ...
	I1115 10:35:08.788915  358343 cli_runner.go:164] Run: docker container inspect embed-certs-719574 --format={{.State.Status}}
	I1115 10:35:08.826800  358343 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:08.826832  358343 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:35:08.826903  358343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:08.829210  358343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:08.860334  358343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:07.982406  368849 node_ready.go:49] node "no-preload-283677" is "Ready"
	I1115 10:35:07.982441  368849 node_ready.go:38] duration metric: took 2.309447891s for node "no-preload-283677" to be "Ready" ...
	I1115 10:35:07.982458  368849 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:35:07.982514  368849 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:35:08.305043  368849 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.614815003s)
	I1115 10:35:09.469849  368849 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.777345954s)
	I1115 10:35:09.570449  368849 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.587920258s)
	I1115 10:35:09.570502  368849 api_server.go:72] duration metric: took 4.363836242s to wait for apiserver process to appear ...
	I1115 10:35:09.570512  368849 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:35:09.570533  368849 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:35:09.571399  368849 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.485550405s)
	I1115 10:35:09.577304  368849 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:35:09.577335  368849 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
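Note (not part of the captured output): the 500 above is the apiserver aggregating its poststarthook checks; only rbac/bootstrap-roles is still failing, and the retry at 10:35:10 below returns 200. A minimal sketch of the same probe, skipping TLS verification the way the health check here does (endpoint taken from the log):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The serving cert is signed by the cluster's own CA, so a bare probe
		// skips verification rather than loading minikubeCA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for attempt := 1; attempt <= 10; attempt++ {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("attempt %d: %s\n%s\n", attempt, resp.Status, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}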
	I1115 10:35:09.615635  368849 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-283677 addons enable metrics-server
	
	I1115 10:35:09.664183  368849 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1115 10:35:09.099411  358343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 10:35:09.117209  358343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:09.178025  358343 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:35:09.223129  358343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:09.623693  358343 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1115 10:35:10.025397  358343 node_ready.go:35] waiting up to 6m0s for node "embed-certs-719574" to be "Ready" ...
	I1115 10:35:10.035887  358343 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1115 10:35:09.713917  368849 addons.go:515] duration metric: took 4.506959144s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1115 10:35:10.070918  368849 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:35:10.081303  368849 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1115 10:35:10.083487  368849 api_server.go:141] control plane version: v1.34.1
	I1115 10:35:10.083520  368849 api_server.go:131] duration metric: took 513.000945ms to wait for apiserver health ...
	I1115 10:35:10.083532  368849 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:35:10.088663  368849 system_pods.go:59] 8 kube-system pods found
	I1115 10:35:10.088717  368849 system_pods.go:61] "coredns-66bc5c9577-66nkj" [077957ec-b312-4412-a6b1-ae36eb2e7e16] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:10.088737  368849 system_pods.go:61] "etcd-no-preload-283677" [bf5ec52e-181c-4b5c-abb2-80ac3fcc26ef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:35:10.088745  368849 system_pods.go:61] "kindnet-x5rwg" [e504759b-46cd-4a41-a8cd-050722131a7d] Running
	I1115 10:35:10.088754  368849 system_pods.go:61] "kube-apiserver-no-preload-283677" [a1c78910-24db-4447-bfb5-f0dd4685d2b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:35:10.088761  368849 system_pods.go:61] "kube-controller-manager-no-preload-283677" [c7c2ba73-517d-48fc-b874-2ab3b653c5a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:35:10.088771  368849 system_pods.go:61] "kube-proxy-vjbxg" [68dffa75-569b-42ef-b4b2-c02a9c1938e7] Running
	I1115 10:35:10.088779  368849 system_pods.go:61] "kube-scheduler-no-preload-283677" [9e0abc54-bc72-4122-b46f-08a74328972d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:35:10.088786  368849 system_pods.go:61] "storage-provisioner" [24222831-4bc3-4c24-87ba-fd523a1e0c85] Running
	I1115 10:35:10.088797  368849 system_pods.go:74] duration metric: took 5.256404ms to wait for pod list to return data ...
	I1115 10:35:10.088807  368849 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:35:10.091629  368849 default_sa.go:45] found service account: "default"
	I1115 10:35:10.091653  368849 default_sa.go:55] duration metric: took 2.838862ms for default service account to be created ...
	I1115 10:35:10.091661  368849 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:35:10.094315  368849 system_pods.go:86] 8 kube-system pods found
	I1115 10:35:10.094343  368849 system_pods.go:89] "coredns-66bc5c9577-66nkj" [077957ec-b312-4412-a6b1-ae36eb2e7e16] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:10.094352  368849 system_pods.go:89] "etcd-no-preload-283677" [bf5ec52e-181c-4b5c-abb2-80ac3fcc26ef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:35:10.094358  368849 system_pods.go:89] "kindnet-x5rwg" [e504759b-46cd-4a41-a8cd-050722131a7d] Running
	I1115 10:35:10.094364  368849 system_pods.go:89] "kube-apiserver-no-preload-283677" [a1c78910-24db-4447-bfb5-f0dd4685d2b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:35:10.094370  368849 system_pods.go:89] "kube-controller-manager-no-preload-283677" [c7c2ba73-517d-48fc-b874-2ab3b653c5a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:35:10.094375  368849 system_pods.go:89] "kube-proxy-vjbxg" [68dffa75-569b-42ef-b4b2-c02a9c1938e7] Running
	I1115 10:35:10.094380  368849 system_pods.go:89] "kube-scheduler-no-preload-283677" [9e0abc54-bc72-4122-b46f-08a74328972d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:35:10.094385  368849 system_pods.go:89] "storage-provisioner" [24222831-4bc3-4c24-87ba-fd523a1e0c85] Running
	I1115 10:35:10.094397  368849 system_pods.go:126] duration metric: took 2.730305ms to wait for k8s-apps to be running ...
	I1115 10:35:10.094406  368849 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:35:10.094448  368849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:35:10.111038  368849 system_svc.go:56] duration metric: took 16.619407ms WaitForService to wait for kubelet
	I1115 10:35:10.111085  368849 kubeadm.go:587] duration metric: took 4.90441795s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:35:10.111109  368849 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:35:10.115110  368849 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:35:10.115139  368849 node_conditions.go:123] node cpu capacity is 8
	I1115 10:35:10.115152  368849 node_conditions.go:105] duration metric: took 4.037488ms to run NodePressure ...
	I1115 10:35:10.115164  368849 start.go:242] waiting for startup goroutines ...
	I1115 10:35:10.115171  368849 start.go:247] waiting for cluster config update ...
	I1115 10:35:10.115181  368849 start.go:256] writing updated cluster config ...
	I1115 10:35:10.115423  368849 ssh_runner.go:195] Run: rm -f paused
	I1115 10:35:10.120133  368849 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:35:10.125364  368849 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-66nkj" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 10:35:07.766656  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	W1115 10:35:09.768019  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	W1115 10:35:12.265348  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	I1115 10:35:08.162939  367608 out.go:252]   - Booting up control plane ...
	I1115 10:35:08.163067  367608 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 10:35:08.163214  367608 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 10:35:08.164559  367608 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 10:35:08.191442  367608 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 10:35:08.191597  367608 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 10:35:08.204536  367608 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 10:35:08.204949  367608 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 10:35:08.205027  367608 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 10:35:08.354479  367608 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 10:35:08.354645  367608 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 10:35:08.861234  367608 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 506.557584ms
	I1115 10:35:08.866845  367608 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 10:35:08.866999  367608 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1115 10:35:08.867498  367608 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 10:35:08.867607  367608 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 10:35:11.654618  367608 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.787069529s
	I1115 10:35:10.037459  358343 addons.go:515] duration metric: took 1.288097776s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1115 10:35:10.128075  358343 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-719574" context rescaled to 1 replicas
	W1115 10:35:12.028773  358343 node_ready.go:57] node "embed-certs-719574" has "Ready":"False" status (will retry)
	I1115 10:35:12.738784  367608 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.871726778s
	I1115 10:35:14.368847  367608 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501769669s
	I1115 10:35:14.386759  367608 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 10:35:14.403947  367608 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 10:35:14.415210  367608 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 10:35:14.415430  367608 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-026691 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 10:35:14.423662  367608 kubeadm.go:319] [bootstrap-token] Using token: la4gix.ai6olk5ks1jiibdz
	I1115 10:35:14.424934  367608 out.go:252]   - Configuring RBAC rules ...
	I1115 10:35:14.425149  367608 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 10:35:14.429405  367608 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 10:35:14.436815  367608 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 10:35:14.440353  367608 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 10:35:14.443132  367608 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 10:35:14.445801  367608 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 10:35:14.780630  367608 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 10:35:15.244870  367608 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 10:35:15.776235  367608 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 10:35:15.777454  367608 kubeadm.go:319] 
	I1115 10:35:15.777560  367608 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 10:35:15.777580  367608 kubeadm.go:319] 
	I1115 10:35:15.777679  367608 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 10:35:15.777709  367608 kubeadm.go:319] 
	I1115 10:35:15.777773  367608 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 10:35:15.777885  367608 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 10:35:15.777990  367608 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 10:35:15.778001  367608 kubeadm.go:319] 
	I1115 10:35:15.778075  367608 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 10:35:15.778084  367608 kubeadm.go:319] 
	I1115 10:35:15.778150  367608 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 10:35:15.778161  367608 kubeadm.go:319] 
	I1115 10:35:15.778232  367608 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 10:35:15.778338  367608 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 10:35:15.778434  367608 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 10:35:15.778441  367608 kubeadm.go:319] 
	I1115 10:35:15.778545  367608 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 10:35:15.778663  367608 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 10:35:15.778670  367608 kubeadm.go:319] 
	I1115 10:35:15.778785  367608 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token la4gix.ai6olk5ks1jiibdz \
	I1115 10:35:15.778928  367608 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c958101d3a67868123c5e439a6c1ea6e30a99a6538b76cbe461e2c919cf1aec \
	I1115 10:35:15.778967  367608 kubeadm.go:319] 	--control-plane 
	I1115 10:35:15.778973  367608 kubeadm.go:319] 
	I1115 10:35:15.779089  367608 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 10:35:15.779096  367608 kubeadm.go:319] 
	I1115 10:35:15.779206  367608 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token la4gix.ai6olk5ks1jiibdz \
	I1115 10:35:15.779345  367608 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c958101d3a67868123c5e439a6c1ea6e30a99a6538b76cbe461e2c919cf1aec 
	I1115 10:35:15.783505  367608 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1115 10:35:15.783826  367608 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1115 10:35:15.784060  367608 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 10:35:15.784094  367608 cni.go:84] Creating CNI manager for ""
	I1115 10:35:15.784108  367608 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:35:15.786778  367608 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1115 10:35:12.132013  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:14.175249  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:16.631763  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:14.265828  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	W1115 10:35:16.764850  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	I1115 10:35:15.788182  367608 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 10:35:15.793094  367608 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 10:35:15.793115  367608 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 10:35:15.809048  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 10:35:16.098742  367608 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:35:16.098819  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:16.098855  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-026691 minikube.k8s.io/updated_at=2025_11_15T10_35_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510 minikube.k8s.io/name=default-k8s-diff-port-026691 minikube.k8s.io/primary=true
	I1115 10:35:16.112393  367608 ops.go:34] apiserver oom_adj: -16
	I1115 10:35:16.271409  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:16.771783  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:17.271668  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1115 10:35:14.029094  358343 node_ready.go:57] node "embed-certs-719574" has "Ready":"False" status (will retry)
	W1115 10:35:16.031395  358343 node_ready.go:57] node "embed-certs-719574" has "Ready":"False" status (will retry)
	W1115 10:35:18.528752  358343 node_ready.go:57] node "embed-certs-719574" has "Ready":"False" status (will retry)
	I1115 10:35:17.772413  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:18.271612  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:18.772434  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:19.272327  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:19.771542  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:20.271635  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:20.771571  367608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:35:20.950683  367608 kubeadm.go:1114] duration metric: took 4.851926991s to wait for elevateKubeSystemPrivileges
	I1115 10:35:20.950730  367608 kubeadm.go:403] duration metric: took 18.793128713s to StartCluster
	I1115 10:35:20.950755  367608 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:20.950836  367608 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:35:20.954212  367608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:20.954530  367608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 10:35:20.954547  367608 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:35:20.954629  367608 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:35:20.954736  367608 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-026691"
	I1115 10:35:20.954764  367608 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-026691"
	I1115 10:35:20.954792  367608 config.go:182] Loaded profile config "default-k8s-diff-port-026691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:20.954800  367608 host.go:66] Checking if "default-k8s-diff-port-026691" exists ...
	I1115 10:35:20.954806  367608 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-026691"
	I1115 10:35:20.955146  367608 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-026691"
	I1115 10:35:20.955492  367608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:35:20.955510  367608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:35:20.956132  367608 out.go:179] * Verifying Kubernetes components...
	I1115 10:35:20.957534  367608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:20.983066  367608 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-026691"
	I1115 10:35:20.983119  367608 host.go:66] Checking if "default-k8s-diff-port-026691" exists ...
	I1115 10:35:20.983674  367608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:35:20.983883  367608 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:35:20.985223  367608 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:20.985248  367608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:35:20.985304  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:35:21.009815  367608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:35:21.012487  367608 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:21.012509  367608 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:35:21.012558  367608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:35:21.043532  367608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:35:21.227388  367608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 10:35:21.242981  367608 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:21.243543  367608 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:21.345690  367608 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:35:21.760244  367608 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1115 10:35:21.984321  367608 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-026691" to be "Ready" ...
	I1115 10:35:21.984944  367608 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1115 10:35:19.130395  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:21.131716  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:19.266357  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	W1115 10:35:21.765103  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	I1115 10:35:21.986234  367608 addons.go:515] duration metric: took 1.031599786s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1115 10:35:22.264566  367608 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-026691" context rescaled to 1 replicas
	I1115 10:35:20.529126  358343 node_ready.go:49] node "embed-certs-719574" is "Ready"
	I1115 10:35:20.529163  358343 node_ready.go:38] duration metric: took 10.503731212s for node "embed-certs-719574" to be "Ready" ...
	I1115 10:35:20.529181  358343 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:35:20.529240  358343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:35:20.545196  358343 api_server.go:72] duration metric: took 11.796320759s to wait for apiserver process to appear ...
	I1115 10:35:20.545225  358343 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:35:20.545247  358343 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1115 10:35:20.549570  358343 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1115 10:35:20.550653  358343 api_server.go:141] control plane version: v1.34.1
	I1115 10:35:20.550677  358343 api_server.go:131] duration metric: took 5.444907ms to wait for apiserver health ...
	I1115 10:35:20.550686  358343 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:35:20.554086  358343 system_pods.go:59] 8 kube-system pods found
	I1115 10:35:20.554122  358343 system_pods.go:61] "coredns-66bc5c9577-fjzk5" [d4d185bc-88ec-4edb-b250-6a59ee426bf5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:20.554130  358343 system_pods.go:61] "etcd-embed-certs-719574" [293cb467-4f15-417b-944e-457c3ac7e56f] Running
	I1115 10:35:20.554138  358343 system_pods.go:61] "kindnet-ql2r4" [224e2951-6c97-449d-8ff8-f72aa6d36d60] Running
	I1115 10:35:20.554143  358343 system_pods.go:61] "kube-apiserver-embed-certs-719574" [43a83385-040c-470f-8242-5066617ac8d7] Running
	I1115 10:35:20.554152  358343 system_pods.go:61] "kube-controller-manager-embed-certs-719574" [78f16cd1-774e-4556-9db6-bca54bc8214b] Running
	I1115 10:35:20.554156  358343 system_pods.go:61] "kube-proxy-kmc8c" [3534c76c-4b99-4b84-ba00-21d0d49e770f] Running
	I1115 10:35:20.554161  358343 system_pods.go:61] "kube-scheduler-embed-certs-719574" [9b8d2d78-442d-48cc-88be-29b9eb7017e7] Running
	I1115 10:35:20.554169  358343 system_pods.go:61] "storage-provisioner" [39c3baf2-24de-475e-aeef-a10825991ca3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:35:20.554182  358343 system_pods.go:74] duration metric: took 3.483657ms to wait for pod list to return data ...
	I1115 10:35:20.554197  358343 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:35:20.556665  358343 default_sa.go:45] found service account: "default"
	I1115 10:35:20.556685  358343 default_sa.go:55] duration metric: took 2.480305ms for default service account to be created ...
	I1115 10:35:20.556695  358343 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:35:20.559910  358343 system_pods.go:86] 8 kube-system pods found
	I1115 10:35:20.559938  358343 system_pods.go:89] "coredns-66bc5c9577-fjzk5" [d4d185bc-88ec-4edb-b250-6a59ee426bf5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:20.559965  358343 system_pods.go:89] "etcd-embed-certs-719574" [293cb467-4f15-417b-944e-457c3ac7e56f] Running
	I1115 10:35:20.559978  358343 system_pods.go:89] "kindnet-ql2r4" [224e2951-6c97-449d-8ff8-f72aa6d36d60] Running
	I1115 10:35:20.559986  358343 system_pods.go:89] "kube-apiserver-embed-certs-719574" [43a83385-040c-470f-8242-5066617ac8d7] Running
	I1115 10:35:20.559993  358343 system_pods.go:89] "kube-controller-manager-embed-certs-719574" [78f16cd1-774e-4556-9db6-bca54bc8214b] Running
	I1115 10:35:20.560001  358343 system_pods.go:89] "kube-proxy-kmc8c" [3534c76c-4b99-4b84-ba00-21d0d49e770f] Running
	I1115 10:35:20.560007  358343 system_pods.go:89] "kube-scheduler-embed-certs-719574" [9b8d2d78-442d-48cc-88be-29b9eb7017e7] Running
	I1115 10:35:20.560018  358343 system_pods.go:89] "storage-provisioner" [39c3baf2-24de-475e-aeef-a10825991ca3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:35:20.560051  358343 retry.go:31] will retry after 304.306696ms: missing components: kube-dns
	I1115 10:35:20.869745  358343 system_pods.go:86] 8 kube-system pods found
	I1115 10:35:20.870073  358343 system_pods.go:89] "coredns-66bc5c9577-fjzk5" [d4d185bc-88ec-4edb-b250-6a59ee426bf5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:20.870105  358343 system_pods.go:89] "etcd-embed-certs-719574" [293cb467-4f15-417b-944e-457c3ac7e56f] Running
	I1115 10:35:20.870140  358343 system_pods.go:89] "kindnet-ql2r4" [224e2951-6c97-449d-8ff8-f72aa6d36d60] Running
	I1115 10:35:20.870174  358343 system_pods.go:89] "kube-apiserver-embed-certs-719574" [43a83385-040c-470f-8242-5066617ac8d7] Running
	I1115 10:35:20.870205  358343 system_pods.go:89] "kube-controller-manager-embed-certs-719574" [78f16cd1-774e-4556-9db6-bca54bc8214b] Running
	I1115 10:35:20.870223  358343 system_pods.go:89] "kube-proxy-kmc8c" [3534c76c-4b99-4b84-ba00-21d0d49e770f] Running
	I1115 10:35:20.870251  358343 system_pods.go:89] "kube-scheduler-embed-certs-719574" [9b8d2d78-442d-48cc-88be-29b9eb7017e7] Running
	I1115 10:35:20.870298  358343 system_pods.go:89] "storage-provisioner" [39c3baf2-24de-475e-aeef-a10825991ca3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:35:20.870334  358343 retry.go:31] will retry after 263.535875ms: missing components: kube-dns
	I1115 10:35:21.139822  358343 system_pods.go:86] 8 kube-system pods found
	I1115 10:35:21.139860  358343 system_pods.go:89] "coredns-66bc5c9577-fjzk5" [d4d185bc-88ec-4edb-b250-6a59ee426bf5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:21.139867  358343 system_pods.go:89] "etcd-embed-certs-719574" [293cb467-4f15-417b-944e-457c3ac7e56f] Running
	I1115 10:35:21.139875  358343 system_pods.go:89] "kindnet-ql2r4" [224e2951-6c97-449d-8ff8-f72aa6d36d60] Running
	I1115 10:35:21.139879  358343 system_pods.go:89] "kube-apiserver-embed-certs-719574" [43a83385-040c-470f-8242-5066617ac8d7] Running
	I1115 10:35:21.139885  358343 system_pods.go:89] "kube-controller-manager-embed-certs-719574" [78f16cd1-774e-4556-9db6-bca54bc8214b] Running
	I1115 10:35:21.139896  358343 system_pods.go:89] "kube-proxy-kmc8c" [3534c76c-4b99-4b84-ba00-21d0d49e770f] Running
	I1115 10:35:21.139902  358343 system_pods.go:89] "kube-scheduler-embed-certs-719574" [9b8d2d78-442d-48cc-88be-29b9eb7017e7] Running
	I1115 10:35:21.139910  358343 system_pods.go:89] "storage-provisioner" [39c3baf2-24de-475e-aeef-a10825991ca3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:35:21.139934  358343 retry.go:31] will retry after 299.264282ms: missing components: kube-dns
	I1115 10:35:21.445165  358343 system_pods.go:86] 8 kube-system pods found
	I1115 10:35:21.445282  358343 system_pods.go:89] "coredns-66bc5c9577-fjzk5" [d4d185bc-88ec-4edb-b250-6a59ee426bf5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:21.445340  358343 system_pods.go:89] "etcd-embed-certs-719574" [293cb467-4f15-417b-944e-457c3ac7e56f] Running
	I1115 10:35:21.445350  358343 system_pods.go:89] "kindnet-ql2r4" [224e2951-6c97-449d-8ff8-f72aa6d36d60] Running
	I1115 10:35:21.445355  358343 system_pods.go:89] "kube-apiserver-embed-certs-719574" [43a83385-040c-470f-8242-5066617ac8d7] Running
	I1115 10:35:21.445361  358343 system_pods.go:89] "kube-controller-manager-embed-certs-719574" [78f16cd1-774e-4556-9db6-bca54bc8214b] Running
	I1115 10:35:21.445366  358343 system_pods.go:89] "kube-proxy-kmc8c" [3534c76c-4b99-4b84-ba00-21d0d49e770f] Running
	I1115 10:35:21.445371  358343 system_pods.go:89] "kube-scheduler-embed-certs-719574" [9b8d2d78-442d-48cc-88be-29b9eb7017e7] Running
	I1115 10:35:21.445392  358343 system_pods.go:89] "storage-provisioner" [39c3baf2-24de-475e-aeef-a10825991ca3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:35:21.445412  358343 retry.go:31] will retry after 557.501681ms: missing components: kube-dns
	I1115 10:35:22.008757  358343 system_pods.go:86] 8 kube-system pods found
	I1115 10:35:22.008809  358343 system_pods.go:89] "coredns-66bc5c9577-fjzk5" [d4d185bc-88ec-4edb-b250-6a59ee426bf5] Running
	I1115 10:35:22.008817  358343 system_pods.go:89] "etcd-embed-certs-719574" [293cb467-4f15-417b-944e-457c3ac7e56f] Running
	I1115 10:35:22.008823  358343 system_pods.go:89] "kindnet-ql2r4" [224e2951-6c97-449d-8ff8-f72aa6d36d60] Running
	I1115 10:35:22.008830  358343 system_pods.go:89] "kube-apiserver-embed-certs-719574" [43a83385-040c-470f-8242-5066617ac8d7] Running
	I1115 10:35:22.008841  358343 system_pods.go:89] "kube-controller-manager-embed-certs-719574" [78f16cd1-774e-4556-9db6-bca54bc8214b] Running
	I1115 10:35:22.008847  358343 system_pods.go:89] "kube-proxy-kmc8c" [3534c76c-4b99-4b84-ba00-21d0d49e770f] Running
	I1115 10:35:22.008856  358343 system_pods.go:89] "kube-scheduler-embed-certs-719574" [9b8d2d78-442d-48cc-88be-29b9eb7017e7] Running
	I1115 10:35:22.008861  358343 system_pods.go:89] "storage-provisioner" [39c3baf2-24de-475e-aeef-a10825991ca3] Running
	I1115 10:35:22.008871  358343 system_pods.go:126] duration metric: took 1.452168821s to wait for k8s-apps to be running ...
	I1115 10:35:22.008883  358343 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:35:22.008946  358343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:35:22.026719  358343 system_svc.go:56] duration metric: took 17.821769ms WaitForService to wait for kubelet
	I1115 10:35:22.026753  358343 kubeadm.go:587] duration metric: took 13.277885015s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:35:22.026782  358343 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:35:22.030378  358343 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:35:22.030411  358343 node_conditions.go:123] node cpu capacity is 8
	I1115 10:35:22.030431  358343 node_conditions.go:105] duration metric: took 3.642261ms to run NodePressure ...
	I1115 10:35:22.030455  358343 start.go:242] waiting for startup goroutines ...
	I1115 10:35:22.030468  358343 start.go:247] waiting for cluster config update ...
	I1115 10:35:22.030481  358343 start.go:256] writing updated cluster config ...
	I1115 10:35:22.030818  358343 ssh_runner.go:195] Run: rm -f paused
	I1115 10:35:22.035757  358343 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:35:22.039154  358343 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fjzk5" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:22.043361  358343 pod_ready.go:94] pod "coredns-66bc5c9577-fjzk5" is "Ready"
	I1115 10:35:22.043378  358343 pod_ready.go:86] duration metric: took 4.200087ms for pod "coredns-66bc5c9577-fjzk5" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:22.045206  358343 pod_ready.go:83] waiting for pod "etcd-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:22.049024  358343 pod_ready.go:94] pod "etcd-embed-certs-719574" is "Ready"
	I1115 10:35:22.049042  358343 pod_ready.go:86] duration metric: took 3.816184ms for pod "etcd-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:22.050972  358343 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:22.054609  358343 pod_ready.go:94] pod "kube-apiserver-embed-certs-719574" is "Ready"
	I1115 10:35:22.054632  358343 pod_ready.go:86] duration metric: took 3.638655ms for pod "kube-apiserver-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:22.056558  358343 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:22.439711  358343 pod_ready.go:94] pod "kube-controller-manager-embed-certs-719574" is "Ready"
	I1115 10:35:22.439736  358343 pod_ready.go:86] duration metric: took 383.156713ms for pod "kube-controller-manager-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:22.640073  358343 pod_ready.go:83] waiting for pod "kube-proxy-kmc8c" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:23.040176  358343 pod_ready.go:94] pod "kube-proxy-kmc8c" is "Ready"
	I1115 10:35:23.040208  358343 pod_ready.go:86] duration metric: took 400.110752ms for pod "kube-proxy-kmc8c" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:23.240598  358343 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:23.639521  358343 pod_ready.go:94] pod "kube-scheduler-embed-certs-719574" is "Ready"
	I1115 10:35:23.639548  358343 pod_ready.go:86] duration metric: took 398.923501ms for pod "kube-scheduler-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:23.639560  358343 pod_ready.go:40] duration metric: took 1.603769873s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:35:23.683447  358343 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:35:23.685103  358343 out.go:179] * Done! kubectl is now configured to use "embed-certs-719574" cluster and "default" namespace by default
	W1115 10:35:23.630738  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:25.631024  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:24.264211  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	W1115 10:35:26.763775  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	W1115 10:35:23.987891  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:35:25.988063  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:35:29.264506  361423 pod_ready.go:104] pod "coredns-5dd5756b68-bdpfv" is not "Ready", error: <nil>
	I1115 10:35:30.263692  361423 pod_ready.go:94] pod "coredns-5dd5756b68-bdpfv" is "Ready"
	I1115 10:35:30.263719  361423 pod_ready.go:86] duration metric: took 33.505250042s for pod "coredns-5dd5756b68-bdpfv" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:30.266346  361423 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:30.270190  361423 pod_ready.go:94] pod "etcd-old-k8s-version-087235" is "Ready"
	I1115 10:35:30.270213  361423 pod_ready.go:86] duration metric: took 3.846822ms for pod "etcd-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:30.272557  361423 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:30.276198  361423 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-087235" is "Ready"
	I1115 10:35:30.276215  361423 pod_ready.go:86] duration metric: took 3.640479ms for pod "kube-apiserver-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:30.278541  361423 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:30.461598  361423 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-087235" is "Ready"
	I1115 10:35:30.461629  361423 pod_ready.go:86] duration metric: took 183.068971ms for pod "kube-controller-manager-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:30.662428  361423 pod_ready.go:83] waiting for pod "kube-proxy-gl22j" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:31.062369  361423 pod_ready.go:94] pod "kube-proxy-gl22j" is "Ready"
	I1115 10:35:31.062396  361423 pod_ready.go:86] duration metric: took 399.946151ms for pod "kube-proxy-gl22j" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:31.263048  361423 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:31.662025  361423 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-087235" is "Ready"
	I1115 10:35:31.662055  361423 pod_ready.go:86] duration metric: took 398.980765ms for pod "kube-scheduler-old-k8s-version-087235" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:31.662070  361423 pod_ready.go:40] duration metric: took 34.909342767s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:35:31.706606  361423 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1115 10:35:31.708343  361423 out.go:203] 
	W1115 10:35:31.709588  361423 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1115 10:35:31.710764  361423 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1115 10:35:31.711983  361423 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-087235" cluster and "default" namespace by default
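
The "minor skew: 6" note above flags that the host kubectl (v1.34.2) is six minor versions ahead of this cluster's v1.28.0 API server. As a sketch of the workaround the log itself suggests (profile name taken from the log; the exact flag placement is an assumption, not part of the test output):

  # Use the kubectl bundled with minikube for the old cluster, or inspect the skew directly.
  minikube -p old-k8s-version-087235 kubectl -- get pods -A
  kubectl --context old-k8s-version-087235 version
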
	W1115 10:35:28.131245  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:30.131470  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:28.487367  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:35:30.987373  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:35:32.630893  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:35.131203  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:32.987818  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:35:35.487942  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:35:37.630661  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:39.631229  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:41.631298  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:37.488558  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:35:39.987301  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:35:41.987883  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:35:44.130837  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:46.131128  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
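
The node_ready/pod_ready warnings above are minikube's own polling loops, retried until node "default-k8s-diff-port-026691" and the coredns pods in the other profiles report Ready. A rough manual equivalent, as a sketch only (context and node names taken from the log; kubectl wait is not what the test harness actually runs):

  kubectl --context default-k8s-diff-port-026691 wait --for=condition=Ready \
    node/default-k8s-diff-port-026691 --timeout=6m
  kubectl --context default-k8s-diff-port-026691 -n kube-system wait \
    --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m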
	
	
	==> CRI-O <==
	Nov 15 10:35:31 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:31.050560271Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:35:31 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:31.05589738Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:35:31 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:31.056471982Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:35:31 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:31.076788197Z" level=info msg="Created container 235ae938bb4114fcf19fc01771df50bcd55d9e0253dd1fc357c036d91b1dbd6d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-58wdf/dashboard-metrics-scraper" id=8010cbdb-7bec-4bce-90d1-dc4e4f99525c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:35:31 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:31.077414109Z" level=info msg="Starting container: 235ae938bb4114fcf19fc01771df50bcd55d9e0253dd1fc357c036d91b1dbd6d" id=6d33c515-7f00-42ae-894b-75b9535b33bf name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:35:31 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:31.079368598Z" level=info msg="Started container" PID=1812 containerID=235ae938bb4114fcf19fc01771df50bcd55d9e0253dd1fc357c036d91b1dbd6d description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-58wdf/dashboard-metrics-scraper id=6d33c515-7f00-42ae-894b-75b9535b33bf name=/runtime.v1.RuntimeService/StartContainer sandboxID=e0fbb810feadeffbb61460bd785a3b20faec65f04dae7b98210e0b136e0e93d7
	Nov 15 10:35:31 old-k8s-version-087235 conmon[1810]: conmon 235ae938bb4114fcf19f <ninfo>: container 1812 exited with status 1
	Nov 15 10:35:31 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:31.385811844Z" level=info msg="Removing container: 009b1abd319e558a009c9e01ea269a7e703df0fddbd0a74c73f9e2fcfd875758" id=faf72980-254f-43b1-974e-e968aefa14af name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:35:31 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:31.393293421Z" level=info msg="Error loading conmon cgroup of container 009b1abd319e558a009c9e01ea269a7e703df0fddbd0a74c73f9e2fcfd875758: cgroup deleted" id=faf72980-254f-43b1-974e-e968aefa14af name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:35:31 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:31.396879214Z" level=info msg="Removed container 009b1abd319e558a009c9e01ea269a7e703df0fddbd0a74c73f9e2fcfd875758: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-58wdf/dashboard-metrics-scraper" id=faf72980-254f-43b1-974e-e968aefa14af name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:35:34 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:34.02641801Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:35:34 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:34.031044812Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:35:34 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:34.031073767Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:35:34 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:34.031095792Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:35:34 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:34.034877736Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:35:34 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:34.034901349Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:35:34 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:34.034918273Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:35:34 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:34.038475411Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:35:34 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:34.038497108Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:35:34 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:34.038513406Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:35:34 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:34.042218644Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:35:34 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:34.042241969Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:35:34 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:34.042258816Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:35:34 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:34.04602024Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:35:34 old-k8s-version-087235 crio[680]: time="2025-11-15T10:35:34.046041888Z" level=info msg="Updated default CNI network name to kindnet"
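
The CNI monitoring events above show CRI-O picking up the kindnet config as it is written and renamed under /etc/cni/net.d. A sketch for verifying the resulting state on the node (run inside the node, e.g. via minikube ssh -p old-k8s-version-087235; the grep pattern is an assumption about crictl's JSON field names):

  sudo cat /etc/cni/net.d/10-kindnet.conflist
  sudo crictl info | grep -A5 -i networkready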
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	235ae938bb411       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           16 seconds ago      Exited              dashboard-metrics-scraper   2                   e0fbb810feade       dashboard-metrics-scraper-5f989dc9cf-58wdf       kubernetes-dashboard
	141d480cf3b64       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         2                   61175e80abd59       storage-provisioner                              kube-system
	9e7dab2808e72       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   31 seconds ago      Running             kubernetes-dashboard        0                   89607ac1ff73b       kubernetes-dashboard-8694d4445c-sh86n            kubernetes-dashboard
	0e4febf6eeb91       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           54 seconds ago      Running             coredns                     1                   1d5b30ab515c3       coredns-5dd5756b68-bdpfv                         kube-system
	0675b2d0a0d42       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   8eb7ffb1e7780       busybox                                          default
	034573ebc5310       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         1                   61175e80abd59       storage-provisioner                              kube-system
	7594c7c2d6107       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 1                   b00571944eb80       kindnet-7btvm                                    kube-system
	ba78e319d11c5       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           54 seconds ago      Running             kube-proxy                  1                   3da76bdff9320       kube-proxy-gl22j                                 kube-system
	b8b1ccd6451f4       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           58 seconds ago      Running             kube-controller-manager     1                   d182e8cf08c23       kube-controller-manager-old-k8s-version-087235   kube-system
	8ce75f5e9ad57       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           58 seconds ago      Running             kube-scheduler              1                   9c27fede70f64       kube-scheduler-old-k8s-version-087235            kube-system
	3fd62a9dd4769       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           58 seconds ago      Running             kube-apiserver              1                   647e749f01c15       kube-apiserver-old-k8s-version-087235            kube-system
	dabb8b4809806       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           58 seconds ago      Running             etcd                        1                   ae9512cde292a       etcd-old-k8s-version-087235                      kube-system
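
The listing above is CRI-O's container table; note that dashboard-metrics-scraper (attempt 2) is in the Exited state. A sketch for reproducing the table and pulling that container's logs on the node (container ID truncated as in the table; crictl accepts ID prefixes):

  sudo crictl ps -a
  sudo crictl logs 235ae938bb411
  sudo crictl inspect 235ae938bb411 | grep -i exitCode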
	
	
	==> coredns [0e4febf6eeb916f0992d7e320785e3dbfccc6cfc0e69f63884d452c516e43258] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54176 - 8302 "HINFO IN 8801084867188015004.839990236938567246. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.014982633s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
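
The CoreDNS log above shows the kubernetes plugin waiting for the API before serving and the ready plugin still waiting; this ConfigMap is also where minikube injects its host.minikube.internal record (seen earlier in the run for another profile). A sketch for inspecting both from the host (context name taken from the log; the ready plugin's default port 8181 is an assumption about this Corefile):

  kubectl --context old-k8s-version-087235 -n kube-system get configmap coredns -o yaml
  kubectl --context old-k8s-version-087235 -n kube-system port-forward deploy/coredns 8181:8181 &
  curl -s http://127.0.0.1:8181/ready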
	
	
	==> describe nodes <==
	Name:               old-k8s-version-087235
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-087235
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=old-k8s-version-087235
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_33_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:33:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-087235
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:35:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:35:23 +0000   Sat, 15 Nov 2025 10:33:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:35:23 +0000   Sat, 15 Nov 2025 10:33:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:35:23 +0000   Sat, 15 Nov 2025 10:33:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:35:23 +0000   Sat, 15 Nov 2025 10:34:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-087235
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                fdfc6964-6bf8-45b6-8dd6-3b0bdf50e4d6
	  Boot ID:                    6a33f504-3a8c-4d49-8fc5-301cf5275efc
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-5dd5756b68-bdpfv                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-old-k8s-version-087235                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m3s
	  kube-system                 kindnet-7btvm                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-old-k8s-version-087235             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-old-k8s-version-087235    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-gl22j                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-old-k8s-version-087235             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-58wdf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-sh86n             0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 109s                 kube-proxy       
	  Normal  Starting                 54s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-087235 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-087235 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-087235 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m3s                 kubelet          Node old-k8s-version-087235 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m3s                 kubelet          Node old-k8s-version-087235 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s                 kubelet          Node old-k8s-version-087235 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m3s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s                 node-controller  Node old-k8s-version-087235 event: Registered Node old-k8s-version-087235 in Controller
	  Normal  NodeReady                94s                  kubelet          Node old-k8s-version-087235 status is now: NodeReady
	  Normal  Starting                 58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x9 over 58s)    kubelet          Node old-k8s-version-087235 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)    kubelet          Node old-k8s-version-087235 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x7 over 58s)    kubelet          Node old-k8s-version-087235 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                  node-controller  Node old-k8s-version-087235 event: Registered Node old-k8s-version-087235 in Controller
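
The node description above is kubectl describe output captured by the test harness. The same conditions and capacity can be queried more narrowly; a sketch (context name taken from the log):

  kubectl --context old-k8s-version-087235 describe node old-k8s-version-087235
  kubectl --context old-k8s-version-087235 get node old-k8s-version-087235 \
    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'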
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 7f 53 f9 15 93 08 06
	[  +0.017821] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c6 66 25 e9 45 28 08 06
	[ +19.178294] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 a3 65 b9 28 15 08 06
	[  +0.347929] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 6d df 33 a5 ad 08 06
	[  +0.000319] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 7f 53 f9 15 93 08 06
	[Nov15 10:33] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 96 ed 10 e8 9a 80 08 06
	[  +0.000481] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 a3 65 b9 28 15 08 06
	[ +35.214217] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 c2 9a 56 52 0c 08 06
	[  +9.104720] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 c2 c4 d1 7c dd 08 06
	[Nov15 10:34] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 68 1e 80 53 05 08 06
	[  +0.000444] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 c2 c4 d1 7c dd 08 06
	[ +18.836046] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 98 db 78 4f 0b 08 06
	[  +0.000708] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 c2 9a 56 52 0c 08 06
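
The dmesg excerpt is dominated by "martian source" messages, which the kernel logs when a packet's source address fails the reverse-path check; with container bridges and NAT these are common noise rather than a failure on their own. A sketch of the sysctls that govern this logging (run on the node; the values are not asserted here):

  sudo sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter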
	
	
	==> etcd [dabb8b48098068214bdf9584f09c135d2dcdd3d138801a98bbacd77829336d90] <==
	{"level":"info","ts":"2025-11-15T10:34:51.061292Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-15T10:34:51.062553Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-11-15T10:34:51.119129Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2025-11-15T10:34:54.548016Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.259207ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790032054591519 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:476 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:3741 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-15T10:34:54.548216Z","caller":"traceutil/trace.go:171","msg":"trace[1674528248] linearizableReadLoop","detail":"{readStateIndex:505; appliedIndex:504; }","duration":"122.157455ms","start":"2025-11-15T10:34:54.426042Z","end":"2025-11-15T10:34:54.548199Z","steps":["trace[1674528248] 'read index received'  (duration: 16.127879ms)","trace[1674528248] 'applied index is now lower than readState.Index'  (duration: 106.027911ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T10:34:54.548232Z","caller":"traceutil/trace.go:171","msg":"trace[1743705293] transaction","detail":"{read_only:false; response_revision:482; number_of_response:1; }","duration":"123.550657ms","start":"2025-11-15T10:34:54.424657Z","end":"2025-11-15T10:34:54.548207Z","steps":["trace[1743705293] 'process raft request'  (duration: 17.565683ms)","trace[1743705293] 'compare'  (duration: 105.151296ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:34:54.548306Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.267798ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/kube-public/system:controller:bootstrap-signer\" ","response":"range_response_count:1 size:709"}
	{"level":"info","ts":"2025-11-15T10:34:54.548339Z","caller":"traceutil/trace.go:171","msg":"trace[160029548] range","detail":"{range_begin:/registry/roles/kube-public/system:controller:bootstrap-signer; range_end:; response_count:1; response_revision:482; }","duration":"122.308441ms","start":"2025-11-15T10:34:54.426019Z","end":"2025-11-15T10:34:54.548328Z","steps":["trace[160029548] 'agreement among raft nodes before linearized reading'  (duration: 122.229592ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T10:34:55.236379Z","caller":"traceutil/trace.go:171","msg":"trace[498445923] linearizableReadLoop","detail":"{readStateIndex:511; appliedIndex:510; }","duration":"120.078661ms","start":"2025-11-15T10:34:55.116283Z","end":"2025-11-15T10:34:55.236361Z","steps":["trace[498445923] 'read index received'  (duration: 56.05862ms)","trace[498445923] 'applied index is now lower than readState.Index'  (duration: 64.019373ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:34:55.236533Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.249275ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/ttl-after-finished-controller\" ","response":"range_response_count:1 size:224"}
	{"level":"info","ts":"2025-11-15T10:34:55.236564Z","caller":"traceutil/trace.go:171","msg":"trace[304975410] transaction","detail":"{read_only:false; response_revision:487; number_of_response:1; }","duration":"122.10709ms","start":"2025-11-15T10:34:55.114439Z","end":"2025-11-15T10:34:55.236546Z","steps":["trace[304975410] 'process raft request'  (duration: 57.869653ms)","trace[304975410] 'compare'  (duration: 63.946128ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T10:34:55.236573Z","caller":"traceutil/trace.go:171","msg":"trace[646848973] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/ttl-after-finished-controller; range_end:; response_count:1; response_revision:487; }","duration":"120.311415ms","start":"2025-11-15T10:34:55.116252Z","end":"2025-11-15T10:34:55.236563Z","steps":["trace[646848973] 'agreement among raft nodes before linearized reading'  (duration: 120.177488ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T10:34:55.491553Z","caller":"traceutil/trace.go:171","msg":"trace[1354959381] linearizableReadLoop","detail":"{readStateIndex:513; appliedIndex:512; }","duration":"158.246756ms","start":"2025-11-15T10:34:55.333291Z","end":"2025-11-15T10:34:55.491538Z","steps":["trace[1354959381] 'read index received'  (duration: 158.143238ms)","trace[1354959381] 'applied index is now lower than readState.Index'  (duration: 102.92µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:34:55.491673Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.389679ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-11-15T10:34:55.491694Z","caller":"traceutil/trace.go:171","msg":"trace[433858641] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:489; }","duration":"158.427797ms","start":"2025-11-15T10:34:55.333261Z","end":"2025-11-15T10:34:55.491689Z","steps":["trace[433858641] 'agreement among raft nodes before linearized reading'  (duration: 158.337252ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T10:34:55.491687Z","caller":"traceutil/trace.go:171","msg":"trace[1782841908] transaction","detail":"{read_only:false; response_revision:489; number_of_response:1; }","duration":"167.584811ms","start":"2025-11-15T10:34:55.324083Z","end":"2025-11-15T10:34:55.491668Z","steps":["trace[1782841908] 'process raft request'  (duration: 167.346476ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T10:34:55.832277Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"238.66064ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kubernetes-dashboard/kubernetes-dashboard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-15T10:34:55.832353Z","caller":"traceutil/trace.go:171","msg":"trace[1927462408] range","detail":"{range_begin:/registry/rolebindings/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:0; response_revision:490; }","duration":"238.753535ms","start":"2025-11-15T10:34:55.59358Z","end":"2025-11-15T10:34:55.832334Z","steps":["trace[1927462408] 'range keys from in-memory index tree'  (duration: 238.577347ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T10:34:55.832286Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.35063ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/persistent-volume-binder\" ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2025-11-15T10:34:55.832449Z","caller":"traceutil/trace.go:171","msg":"trace[838577774] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/persistent-volume-binder; range_end:; response_count:1; response_revision:490; }","duration":"232.543354ms","start":"2025-11-15T10:34:55.599892Z","end":"2025-11-15T10:34:55.832435Z","steps":["trace[838577774] 'range keys from in-memory index tree'  (duration: 232.220453ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T10:34:56.067618Z","caller":"traceutil/trace.go:171","msg":"trace[1917144964] transaction","detail":"{read_only:false; response_revision:493; number_of_response:1; }","duration":"126.238188ms","start":"2025-11-15T10:34:55.941362Z","end":"2025-11-15T10:34:56.0676Z","steps":["trace[1917144964] 'process raft request'  (duration: 126.110419ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T10:34:56.318724Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.889915ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790032054591606 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/secrets/kubernetes-dashboard/kubernetes-dashboard-csrf\" mod_revision:0 > success:<request_put:<key:\"/registry/secrets/kubernetes-dashboard/kubernetes-dashboard-csrf\" value_size:956 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-15T10:34:56.318798Z","caller":"traceutil/trace.go:171","msg":"trace[1770777282] transaction","detail":"{read_only:false; response_revision:494; number_of_response:1; }","duration":"242.777355ms","start":"2025-11-15T10:34:56.076006Z","end":"2025-11-15T10:34:56.318783Z","steps":["trace[1770777282] 'process raft request'  (duration: 116.770885ms)","trace[1770777282] 'compare'  (duration: 125.768788ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T10:34:56.517761Z","caller":"traceutil/trace.go:171","msg":"trace[656185099] transaction","detail":"{read_only:false; response_revision:496; number_of_response:1; }","duration":"167.176403ms","start":"2025-11-15T10:34:56.350564Z","end":"2025-11-15T10:34:56.51774Z","steps":["trace[656185099] 'process raft request'  (duration: 122.311543ms)","trace[656185099] 'compare'  (duration: 44.654268ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T10:34:56.636944Z","caller":"traceutil/trace.go:171","msg":"trace[470485118] transaction","detail":"{read_only:false; response_revision:497; number_of_response:1; }","duration":"117.651719ms","start":"2025-11-15T10:34:56.519275Z","end":"2025-11-15T10:34:56.636926Z","steps":["trace[470485118] 'process raft request'  (duration: 111.818601ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:35:48 up  2:18,  0 user,  load average: 4.01, 4.40, 2.76
	Linux old-k8s-version-087235 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7594c7c2d610745a399557dd1247f6642b08937e57147358c301470340e5bbb3] <==
	I1115 10:34:53.735476       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:34:53.735923       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1115 10:34:53.736240       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:34:53.736272       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:34:53.736288       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:34:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:34:54.025754       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:34:54.026638       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:34:54.026747       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:34:54.026926       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 10:35:24.025947       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1115 10:35:24.026918       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 10:35:24.026993       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 10:35:24.027004       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1115 10:35:25.627591       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:35:25.627624       1 metrics.go:72] Registering metrics
	I1115 10:35:25.627704       1 controller.go:711] "Syncing nftables rules"
	I1115 10:35:34.026041       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1115 10:35:34.026107       1 main.go:301] handling current node
	I1115 10:35:44.031041       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1115 10:35:44.031090       1 main.go:301] handling current node
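The reflector errors above are timeouts against the in-cluster kubernetes Service VIP (10.96.0.1:443), and the storage-provisioner log further down hits the same address. A quick reachability check can be run from inside the node itself; a minimal sketch, assuming curl and iptables are available in the node image (a response, even an HTTP error body, is the expected outcome rather than a timeout):

    # Probe the Service VIP from inside the node; a timeout here matches the errors above
    out/minikube-linux-amd64 -p old-k8s-version-087235 ssh -- curl -sk --max-time 5 https://10.96.0.1/version
    # Check that kube-proxy programmed NAT rules for the ClusterIP
    out/minikube-linux-amd64 -p old-k8s-version-087235 ssh -- sudo iptables -t nat -S KUBE-SERVICES | grep 10.96.0.1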
	
	
	==> kube-apiserver [3fd62a9dd47699ac165f43ff643bf99a6efeeed696c5fdcd642be6b2a9374ff1] <==
	I1115 10:34:52.932502       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1115 10:34:52.947668       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1115 10:34:52.947704       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E1115 10:34:52.948162       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	I1115 10:34:52.954293       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:34:53.018994       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1115 10:34:53.019425       1 shared_informer.go:318] Caches are synced for configmaps
	I1115 10:34:53.020172       1 cache.go:39] Caches are synced for autoregister controller
	E1115 10:34:53.026021       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 10:34:53.823564       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:34:54.922256       1 controller.go:624] quota admission added evaluator for: namespaces
	I1115 10:34:55.247976       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1115 10:34:55.499823       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:34:55.837511       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:34:55.859680       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1115 10:34:56.637636       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.195.233"}
	I1115 10:34:56.665265       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.57.213"}
	E1115 10:35:02.948824       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	I1115 10:35:06.007617       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1115 10:35:06.147275       1 controller.go:624] quota admission added evaluator for: endpoints
	I1115 10:35:06.150530       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1115 10:35:12.949124       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E1115 10:35:22.950151       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high"] items=[{},{},{},{},{},{},{},{}]
	E1115 10:35:32.950643       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","node-high","system","workload-high","workload-low","catch-all","exempt","global-default"] items=[{},{},{},{},{},{},{},{}]
	E1115 10:35:42.951365       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
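The recurring apf_controller errors come from the API Priority and Fairness config worker inside this apiserver. The flow-control objects it is reconciling, and (on versions that expose it) a live dump of the priority levels, can be inspected through the same apiserver; a minimal sketch, assuming kubectl still points at this cluster's context:

    kubectl --context old-k8s-version-087235 get prioritylevelconfigurations,flowschemas
    # Debug endpoint; availability and exact path can vary by Kubernetes version
    kubectl --context old-k8s-version-087235 get --raw /debug/api_priority_and_fairness/dump_priority_levels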
	
	
	==> kube-controller-manager [b8b1ccd6451f4579f89a5a5b4368b0f6ed96c45d344cd9110c94b49fdceb39ed] <==
	I1115 10:35:06.128932       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="106.038636ms"
	I1115 10:35:06.132255       1 shared_informer.go:318] Caches are synced for resource quota
	I1115 10:35:06.133422       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="113.04467ms"
	I1115 10:35:06.139829       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="10.819545ms"
	I1115 10:35:06.140127       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="109.31µs"
	I1115 10:35:06.141142       1 shared_informer.go:318] Caches are synced for crt configmap
	I1115 10:35:06.142131       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="8.616169ms"
	I1115 10:35:06.142325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="75.326µs"
	I1115 10:35:06.152808       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I1115 10:35:06.219580       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="80.4µs"
	I1115 10:35:06.219815       1 shared_informer.go:318] Caches are synced for stateful set
	I1115 10:35:06.219827       1 shared_informer.go:318] Caches are synced for resource quota
	I1115 10:35:06.219943       1 shared_informer.go:318] Caches are synced for daemon sets
	I1115 10:35:06.543790       1 shared_informer.go:318] Caches are synced for garbage collector
	I1115 10:35:06.612690       1 shared_informer.go:318] Caches are synced for garbage collector
	I1115 10:35:06.612732       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1115 10:35:11.292556       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="99.369µs"
	I1115 10:35:12.337691       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="96.79µs"
	I1115 10:35:13.339992       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.703µs"
	I1115 10:35:16.369001       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="9.025904ms"
	I1115 10:35:16.369362       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="132.688µs"
	I1115 10:35:29.877018       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.887332ms"
	I1115 10:35:29.877133       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.485µs"
	I1115 10:35:31.396374       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="121.27µs"
	I1115 10:35:36.443505       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="161.761µs"
	
	
	==> kube-proxy [ba78e319d11c588a26d306264073a90262f5ec5da127e677e9bdbe733738df60] <==
	I1115 10:34:53.644611       1 server_others.go:69] "Using iptables proxy"
	I1115 10:34:53.658002       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1115 10:34:53.733465       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:34:53.737099       1 server_others.go:152] "Using iptables Proxier"
	I1115 10:34:53.737136       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1115 10:34:53.737143       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1115 10:34:53.737183       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1115 10:34:53.737470       1 server.go:846] "Version info" version="v1.28.0"
	I1115 10:34:53.737494       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:34:53.738284       1 config.go:188] "Starting service config controller"
	I1115 10:34:53.738365       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1115 10:34:53.738415       1 config.go:315] "Starting node config controller"
	I1115 10:34:53.738443       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1115 10:34:53.738794       1 config.go:97] "Starting endpoint slice config controller"
	I1115 10:34:53.738840       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1115 10:34:53.839151       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1115 10:34:53.839219       1 shared_informer.go:318] Caches are synced for service config
	I1115 10:34:53.839736       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [8ce75f5e9ad57aaaace9af39da481c138fb57073d1fee7bc88e75f67b8b6e7f7] <==
	I1115 10:34:50.253637       1 serving.go:348] Generated self-signed cert in-memory
	W1115 10:34:52.832609       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1115 10:34:52.834009       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1115 10:34:52.834184       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1115 10:34:52.834259       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1115 10:34:53.023507       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1115 10:34:53.023610       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:34:53.027685       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:34:53.027778       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1115 10:34:53.029053       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1115 10:34:53.029481       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1115 10:34:53.128680       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 15 10:35:06 old-k8s-version-087235 kubelet[836]: I1115 10:35:06.224603     836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjv4z\" (UniqueName: \"kubernetes.io/projected/cdb69a62-a600-4d3b-aaec-535c3b64028f-kube-api-access-rjv4z\") pod \"kubernetes-dashboard-8694d4445c-sh86n\" (UID: \"cdb69a62-a600-4d3b-aaec-535c3b64028f\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-sh86n"
	Nov 15 10:35:06 old-k8s-version-087235 kubelet[836]: I1115 10:35:06.224699     836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjcrp\" (UniqueName: \"kubernetes.io/projected/36f671bc-2446-4742-af31-8d43717071b8-kube-api-access-jjcrp\") pod \"dashboard-metrics-scraper-5f989dc9cf-58wdf\" (UID: \"36f671bc-2446-4742-af31-8d43717071b8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-58wdf"
	Nov 15 10:35:06 old-k8s-version-087235 kubelet[836]: I1115 10:35:06.224766     836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/cdb69a62-a600-4d3b-aaec-535c3b64028f-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-sh86n\" (UID: \"cdb69a62-a600-4d3b-aaec-535c3b64028f\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-sh86n"
	Nov 15 10:35:06 old-k8s-version-087235 kubelet[836]: I1115 10:35:06.224803     836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/36f671bc-2446-4742-af31-8d43717071b8-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-58wdf\" (UID: \"36f671bc-2446-4742-af31-8d43717071b8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-58wdf"
	Nov 15 10:35:06 old-k8s-version-087235 kubelet[836]: W1115 10:35:06.450642     836 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/3d4715b4872d2f4603f13f0191a01c4322a26432e1316c2ae62b1c571bea1814/crio-e0fbb810feadeffbb61460bd785a3b20faec65f04dae7b98210e0b136e0e93d7 WatchSource:0}: Error finding container e0fbb810feadeffbb61460bd785a3b20faec65f04dae7b98210e0b136e0e93d7: Status 404 returned error can't find the container with id e0fbb810feadeffbb61460bd785a3b20faec65f04dae7b98210e0b136e0e93d7
	Nov 15 10:35:06 old-k8s-version-087235 kubelet[836]: W1115 10:35:06.451561     836 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/3d4715b4872d2f4603f13f0191a01c4322a26432e1316c2ae62b1c571bea1814/crio-89607ac1ff73bb74dac1ac6d3bc00c1684b24c2031a1c798fdf3a7489e6efe24 WatchSource:0}: Error finding container 89607ac1ff73bb74dac1ac6d3bc00c1684b24c2031a1c798fdf3a7489e6efe24: Status 404 returned error can't find the container with id 89607ac1ff73bb74dac1ac6d3bc00c1684b24c2031a1c798fdf3a7489e6efe24
	Nov 15 10:35:11 old-k8s-version-087235 kubelet[836]: I1115 10:35:11.281020     836 scope.go:117] "RemoveContainer" containerID="fd6737949022a48de4ef52058917effe45da619e197fc6ac8936ccc51cdc86d7"
	Nov 15 10:35:12 old-k8s-version-087235 kubelet[836]: I1115 10:35:12.323547     836 scope.go:117] "RemoveContainer" containerID="fd6737949022a48de4ef52058917effe45da619e197fc6ac8936ccc51cdc86d7"
	Nov 15 10:35:12 old-k8s-version-087235 kubelet[836]: I1115 10:35:12.323763     836 scope.go:117] "RemoveContainer" containerID="009b1abd319e558a009c9e01ea269a7e703df0fddbd0a74c73f9e2fcfd875758"
	Nov 15 10:35:12 old-k8s-version-087235 kubelet[836]: E1115 10:35:12.324188     836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-58wdf_kubernetes-dashboard(36f671bc-2446-4742-af31-8d43717071b8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-58wdf" podUID="36f671bc-2446-4742-af31-8d43717071b8"
	Nov 15 10:35:13 old-k8s-version-087235 kubelet[836]: I1115 10:35:13.328611     836 scope.go:117] "RemoveContainer" containerID="009b1abd319e558a009c9e01ea269a7e703df0fddbd0a74c73f9e2fcfd875758"
	Nov 15 10:35:13 old-k8s-version-087235 kubelet[836]: E1115 10:35:13.329095     836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-58wdf_kubernetes-dashboard(36f671bc-2446-4742-af31-8d43717071b8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-58wdf" podUID="36f671bc-2446-4742-af31-8d43717071b8"
	Nov 15 10:35:16 old-k8s-version-087235 kubelet[836]: I1115 10:35:16.427734     836 scope.go:117] "RemoveContainer" containerID="009b1abd319e558a009c9e01ea269a7e703df0fddbd0a74c73f9e2fcfd875758"
	Nov 15 10:35:16 old-k8s-version-087235 kubelet[836]: E1115 10:35:16.428147     836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-58wdf_kubernetes-dashboard(36f671bc-2446-4742-af31-8d43717071b8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-58wdf" podUID="36f671bc-2446-4742-af31-8d43717071b8"
	Nov 15 10:35:24 old-k8s-version-087235 kubelet[836]: I1115 10:35:24.364325     836 scope.go:117] "RemoveContainer" containerID="034573ebc531040d6466ecf78c8b86fefe56032a558c0c6e459de1608b9d81f5"
	Nov 15 10:35:24 old-k8s-version-087235 kubelet[836]: I1115 10:35:24.375421     836 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-sh86n" podStartSLOduration=8.704564179 podCreationTimestamp="2025-11-15 10:35:06 +0000 UTC" firstStartedPulling="2025-11-15 10:35:06.455660291 +0000 UTC m=+17.533431579" lastFinishedPulling="2025-11-15 10:35:16.126455817 +0000 UTC m=+27.204227116" observedRunningTime="2025-11-15 10:35:16.36075414 +0000 UTC m=+27.438525456" watchObservedRunningTime="2025-11-15 10:35:24.375359716 +0000 UTC m=+35.453131020"
	Nov 15 10:35:31 old-k8s-version-087235 kubelet[836]: I1115 10:35:31.047813     836 scope.go:117] "RemoveContainer" containerID="009b1abd319e558a009c9e01ea269a7e703df0fddbd0a74c73f9e2fcfd875758"
	Nov 15 10:35:31 old-k8s-version-087235 kubelet[836]: I1115 10:35:31.384529     836 scope.go:117] "RemoveContainer" containerID="009b1abd319e558a009c9e01ea269a7e703df0fddbd0a74c73f9e2fcfd875758"
	Nov 15 10:35:31 old-k8s-version-087235 kubelet[836]: I1115 10:35:31.384759     836 scope.go:117] "RemoveContainer" containerID="235ae938bb4114fcf19fc01771df50bcd55d9e0253dd1fc357c036d91b1dbd6d"
	Nov 15 10:35:31 old-k8s-version-087235 kubelet[836]: E1115 10:35:31.385171     836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-58wdf_kubernetes-dashboard(36f671bc-2446-4742-af31-8d43717071b8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-58wdf" podUID="36f671bc-2446-4742-af31-8d43717071b8"
	Nov 15 10:35:36 old-k8s-version-087235 kubelet[836]: I1115 10:35:36.427004     836 scope.go:117] "RemoveContainer" containerID="235ae938bb4114fcf19fc01771df50bcd55d9e0253dd1fc357c036d91b1dbd6d"
	Nov 15 10:35:36 old-k8s-version-087235 kubelet[836]: E1115 10:35:36.427286     836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-58wdf_kubernetes-dashboard(36f671bc-2446-4742-af31-8d43717071b8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-58wdf" podUID="36f671bc-2446-4742-af31-8d43717071b8"
	Nov 15 10:35:43 old-k8s-version-087235 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:35:43 old-k8s-version-087235 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:35:43 old-k8s-version-087235 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
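The CrashLoopBackOff entries above name the failing container and pod directly. The previous container's logs and the pod's events are usually the next thing to look at; a minimal sketch, assuming the pod name from the kubelet messages is still current:

    kubectl --context old-k8s-version-087235 -n kubernetes-dashboard \
      logs dashboard-metrics-scraper-5f989dc9cf-58wdf --previous
    kubectl --context old-k8s-version-087235 -n kubernetes-dashboard \
      describe pod dashboard-metrics-scraper-5f989dc9cf-58wdf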
	
	
	==> kubernetes-dashboard [9e7dab2808e72b5ecf4c23f3a0c6c73dc08206c22ebcf5da92da7fd1464ea642] <==
	2025/11/15 10:35:16 Using namespace: kubernetes-dashboard
	2025/11/15 10:35:16 Using in-cluster config to connect to apiserver
	2025/11/15 10:35:16 Using secret token for csrf signing
	2025/11/15 10:35:16 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 10:35:16 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 10:35:16 Successful initial request to the apiserver, version: v1.28.0
	2025/11/15 10:35:16 Generating JWE encryption key
	2025/11/15 10:35:16 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 10:35:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 10:35:16 Initializing JWE encryption key from synchronized object
	2025/11/15 10:35:16 Creating in-cluster Sidecar client
	2025/11/15 10:35:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:35:16 Serving insecurely on HTTP port: 9090
	2025/11/15 10:35:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:35:16 Starting overwatch
	
	
	==> storage-provisioner [034573ebc531040d6466ecf78c8b86fefe56032a558c0c6e459de1608b9d81f5] <==
	I1115 10:34:53.549609       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 10:35:23.552985       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [141d480cf3b64b6bc24f8f5013f9a931686b80ed7bf8b12a85bcd2b351953257] <==
	I1115 10:35:24.413328       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 10:35:24.421282       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:35:24.421333       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1115 10:35:41.817573       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:35:41.817707       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c51d799d-ecee-4db4-97cb-68755d563c6e", APIVersion:"v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-087235_b593027d-8c92-4382-92cf-700cbbe389b8 became leader
	I1115 10:35:41.817748       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-087235_b593027d-8c92-4382-92cf-700cbbe389b8!
	I1115 10:35:41.918000       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-087235_b593027d-8c92-4382-92cf-700cbbe389b8!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-087235 -n old-k8s-version-087235
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-087235 -n old-k8s-version-087235: exit status 2 (338.019903ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-087235 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.42s)
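The exit status 2 from the status check above likely reflects the partially completed pause: the kubelet was stopped (see the systemd lines in the kubelet section) while the apiserver container kept running. The same Go-template mechanism used by helpers_test can report all components at once; a minimal sketch, assuming the profile still exists:

    # Non-Running components make the command exit non-zero
    out/minikube-linux-amd64 status -p old-k8s-version-087235 \
      --format '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'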

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (6.73s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-283677 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-283677 --alsologtostderr -v=1: exit status 80 (2.386692019s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-283677 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:36:04.380185  382382 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:36:04.380478  382382 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:36:04.380499  382382 out.go:374] Setting ErrFile to fd 2...
	I1115 10:36:04.380506  382382 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:36:04.380794  382382 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 10:36:04.381103  382382 out.go:368] Setting JSON to false
	I1115 10:36:04.381167  382382 mustload.go:66] Loading cluster: no-preload-283677
	I1115 10:36:04.381647  382382 config.go:182] Loaded profile config "no-preload-283677": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:04.382234  382382 cli_runner.go:164] Run: docker container inspect no-preload-283677 --format={{.State.Status}}
	I1115 10:36:04.403079  382382 host.go:66] Checking if "no-preload-283677" exists ...
	I1115 10:36:04.403470  382382 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:36:04.472633  382382 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:true NGoroutines:87 SystemTime:2025-11-15 10:36:04.460997834 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:36:04.473519  382382 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-283677 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1115 10:36:04.475342  382382 out.go:179] * Pausing node no-preload-283677 ... 
	I1115 10:36:04.476738  382382 host.go:66] Checking if "no-preload-283677" exists ...
	I1115 10:36:04.477087  382382 ssh_runner.go:195] Run: systemctl --version
	I1115 10:36:04.477140  382382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-283677
	I1115 10:36:04.498046  382382 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/no-preload-283677/id_rsa Username:docker}
	I1115 10:36:04.595221  382382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:36:04.608727  382382 pause.go:52] kubelet running: true
	I1115 10:36:04.608796  382382 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:36:04.760046  382382 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:36:04.760172  382382 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:36:04.834450  382382 cri.go:89] found id: "8bd2c710a2c580a8988b5b3071d9f3587a5b7a4d80023277e15840a6b9f2c4fa"
	I1115 10:36:04.834476  382382 cri.go:89] found id: "a1b1db57d497261f854972caaaabfb2ff94437f156ebd9a824ae6eec9b4717be"
	I1115 10:36:04.834482  382382 cri.go:89] found id: "e19aa2c4914343607f446514b29eff501e18401aa8e8ae99efee7a13e1b84831"
	I1115 10:36:04.834487  382382 cri.go:89] found id: "2ed35452acbea6332ff49a4e3b850561f71e2992e41503717b83a4170ecdae97"
	I1115 10:36:04.834491  382382 cri.go:89] found id: "fbd534126f75ad8fd1d5fdcbd5ef4977e3b134a0b5f0bb5ef906b59631045d73"
	I1115 10:36:04.834496  382382 cri.go:89] found id: "324a3ff1cd89d80c17c217faf6db6f2b6c9f52f5abe13f2e83485e1e03b0c7aa"
	I1115 10:36:04.834500  382382 cri.go:89] found id: "8c532dc6e69808afe1a0ffb587f828c0b3f6fa37c71b2fbc5ce4abdafdedf008"
	I1115 10:36:04.834504  382382 cri.go:89] found id: "ac246fc71f81d0fb5e3cd430730c903f2ca388376feb3a8dc321eb565aa6c5ee"
	I1115 10:36:04.834508  382382 cri.go:89] found id: "c26ba954b1e2fc1ea4ef12fc0801c0a31171b23e67cf48fdeb9207cbdb3ba3b0"
	I1115 10:36:04.834525  382382 cri.go:89] found id: "2d20d5dc1c2b66cabb2156b9f8d8669dcb25cbd73f94552cea3da087e47a37c2"
	I1115 10:36:04.834529  382382 cri.go:89] found id: "72e788657e34c4ceb611b3c182b01cfe009c0ebba075aa6c882e7e27152c31ee"
	I1115 10:36:04.834534  382382 cri.go:89] found id: ""
	I1115 10:36:04.834579  382382 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:36:04.848737  382382 retry.go:31] will retry after 271.521094ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:04Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:36:05.121177  382382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:36:05.134345  382382 pause.go:52] kubelet running: false
	I1115 10:36:05.134398  382382 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:36:05.318278  382382 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:36:05.318373  382382 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:36:05.401288  382382 cri.go:89] found id: "8bd2c710a2c580a8988b5b3071d9f3587a5b7a4d80023277e15840a6b9f2c4fa"
	I1115 10:36:05.401307  382382 cri.go:89] found id: "a1b1db57d497261f854972caaaabfb2ff94437f156ebd9a824ae6eec9b4717be"
	I1115 10:36:05.401311  382382 cri.go:89] found id: "e19aa2c4914343607f446514b29eff501e18401aa8e8ae99efee7a13e1b84831"
	I1115 10:36:05.401314  382382 cri.go:89] found id: "2ed35452acbea6332ff49a4e3b850561f71e2992e41503717b83a4170ecdae97"
	I1115 10:36:05.401318  382382 cri.go:89] found id: "fbd534126f75ad8fd1d5fdcbd5ef4977e3b134a0b5f0bb5ef906b59631045d73"
	I1115 10:36:05.401324  382382 cri.go:89] found id: "324a3ff1cd89d80c17c217faf6db6f2b6c9f52f5abe13f2e83485e1e03b0c7aa"
	I1115 10:36:05.401328  382382 cri.go:89] found id: "8c532dc6e69808afe1a0ffb587f828c0b3f6fa37c71b2fbc5ce4abdafdedf008"
	I1115 10:36:05.401332  382382 cri.go:89] found id: "ac246fc71f81d0fb5e3cd430730c903f2ca388376feb3a8dc321eb565aa6c5ee"
	I1115 10:36:05.401336  382382 cri.go:89] found id: "c26ba954b1e2fc1ea4ef12fc0801c0a31171b23e67cf48fdeb9207cbdb3ba3b0"
	I1115 10:36:05.401344  382382 cri.go:89] found id: "2d20d5dc1c2b66cabb2156b9f8d8669dcb25cbd73f94552cea3da087e47a37c2"
	I1115 10:36:05.401348  382382 cri.go:89] found id: "72e788657e34c4ceb611b3c182b01cfe009c0ebba075aa6c882e7e27152c31ee"
	I1115 10:36:05.401352  382382 cri.go:89] found id: ""
	I1115 10:36:05.401399  382382 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:36:05.413143  382382 retry.go:31] will retry after 220.622628ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:05Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:36:05.634456  382382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:36:05.648545  382382 pause.go:52] kubelet running: false
	I1115 10:36:05.648620  382382 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:36:05.799686  382382 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:36:05.799783  382382 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:36:05.892446  382382 cri.go:89] found id: "8bd2c710a2c580a8988b5b3071d9f3587a5b7a4d80023277e15840a6b9f2c4fa"
	I1115 10:36:05.892473  382382 cri.go:89] found id: "a1b1db57d497261f854972caaaabfb2ff94437f156ebd9a824ae6eec9b4717be"
	I1115 10:36:05.892478  382382 cri.go:89] found id: "e19aa2c4914343607f446514b29eff501e18401aa8e8ae99efee7a13e1b84831"
	I1115 10:36:05.892482  382382 cri.go:89] found id: "2ed35452acbea6332ff49a4e3b850561f71e2992e41503717b83a4170ecdae97"
	I1115 10:36:05.892485  382382 cri.go:89] found id: "fbd534126f75ad8fd1d5fdcbd5ef4977e3b134a0b5f0bb5ef906b59631045d73"
	I1115 10:36:05.892488  382382 cri.go:89] found id: "324a3ff1cd89d80c17c217faf6db6f2b6c9f52f5abe13f2e83485e1e03b0c7aa"
	I1115 10:36:05.892491  382382 cri.go:89] found id: "8c532dc6e69808afe1a0ffb587f828c0b3f6fa37c71b2fbc5ce4abdafdedf008"
	I1115 10:36:05.892493  382382 cri.go:89] found id: "ac246fc71f81d0fb5e3cd430730c903f2ca388376feb3a8dc321eb565aa6c5ee"
	I1115 10:36:05.892495  382382 cri.go:89] found id: "c26ba954b1e2fc1ea4ef12fc0801c0a31171b23e67cf48fdeb9207cbdb3ba3b0"
	I1115 10:36:05.892512  382382 cri.go:89] found id: "2d20d5dc1c2b66cabb2156b9f8d8669dcb25cbd73f94552cea3da087e47a37c2"
	I1115 10:36:05.892517  382382 cri.go:89] found id: "72e788657e34c4ceb611b3c182b01cfe009c0ebba075aa6c882e7e27152c31ee"
	I1115 10:36:05.892522  382382 cri.go:89] found id: ""
	I1115 10:36:05.892567  382382 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:36:05.904811  382382 retry.go:31] will retry after 496.369158ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:05Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:36:06.401507  382382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:36:06.416079  382382 pause.go:52] kubelet running: false
	I1115 10:36:06.416147  382382 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:36:06.587397  382382 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:36:06.587482  382382 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:36:06.667746  382382 cri.go:89] found id: "8bd2c710a2c580a8988b5b3071d9f3587a5b7a4d80023277e15840a6b9f2c4fa"
	I1115 10:36:06.668208  382382 cri.go:89] found id: "a1b1db57d497261f854972caaaabfb2ff94437f156ebd9a824ae6eec9b4717be"
	I1115 10:36:06.668216  382382 cri.go:89] found id: "e19aa2c4914343607f446514b29eff501e18401aa8e8ae99efee7a13e1b84831"
	I1115 10:36:06.668221  382382 cri.go:89] found id: "2ed35452acbea6332ff49a4e3b850561f71e2992e41503717b83a4170ecdae97"
	I1115 10:36:06.668225  382382 cri.go:89] found id: "fbd534126f75ad8fd1d5fdcbd5ef4977e3b134a0b5f0bb5ef906b59631045d73"
	I1115 10:36:06.668251  382382 cri.go:89] found id: "324a3ff1cd89d80c17c217faf6db6f2b6c9f52f5abe13f2e83485e1e03b0c7aa"
	I1115 10:36:06.668255  382382 cri.go:89] found id: "8c532dc6e69808afe1a0ffb587f828c0b3f6fa37c71b2fbc5ce4abdafdedf008"
	I1115 10:36:06.668259  382382 cri.go:89] found id: "ac246fc71f81d0fb5e3cd430730c903f2ca388376feb3a8dc321eb565aa6c5ee"
	I1115 10:36:06.668264  382382 cri.go:89] found id: "c26ba954b1e2fc1ea4ef12fc0801c0a31171b23e67cf48fdeb9207cbdb3ba3b0"
	I1115 10:36:06.668272  382382 cri.go:89] found id: "2d20d5dc1c2b66cabb2156b9f8d8669dcb25cbd73f94552cea3da087e47a37c2"
	I1115 10:36:06.668305  382382 cri.go:89] found id: "72e788657e34c4ceb611b3c182b01cfe009c0ebba075aa6c882e7e27152c31ee"
	I1115 10:36:06.668328  382382 cri.go:89] found id: ""
	I1115 10:36:06.668381  382382 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:36:06.686750  382382 out.go:203] 
	W1115 10:36:06.687852  382382 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:36:06.687867  382382 out.go:285] * 
	* 
	W1115 10:36:06.693297  382382 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:36:06.694512  382382 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-283677 --alsologtostderr -v=1 failed: exit status 80
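The failure above is the pause path shelling out to `sudo runc list -f json` inside the node container: the runc state directory it reads (/run/runc, per the error) does not exist, so the listing exits 1 and minikube aborts with GUEST_PAUSE. Below is a minimal, hypothetical Go sketch of that step (not minikube's own code); the --root flag makes the state-directory assumption explicit, so the same error can be reproduced or worked around by pointing at wherever the runtime actually keeps its state.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer holds the subset of fields from `runc list -f json`
// needed to decide whether anything is still running.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listRunc runs `sudo runc [--root dir] list -f json`, the same command the
// log shows failing. With an empty root it uses runc's default state
// directory; if that directory (e.g. /run/runc) is missing, runc exits 1
// with "open /run/runc: no such file or directory", exactly as captured above.
func listRunc(root string) ([]runcContainer, error) {
	args := []string{"runc"}
	if root != "" {
		args = append(args, "--root", root)
	}
	args = append(args, "list", "-f", "json")
	out, err := exec.Command("sudo", args...).Output()
	if err != nil {
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var cs []runcContainer
	if err := json.Unmarshal(out, &cs); err != nil {
		return nil, err
	}
	return cs, nil
}

func main() {
	cs, err := listRunc("") // try an explicit root such as "/run/runc" to compare
	if err != nil {
		fmt.Println("listing failed:", err)
		return
	}
	for _, c := range cs {
		fmt.Println(c.ID, c.Status)
	}
}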
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-283677
helpers_test.go:243: (dbg) docker inspect no-preload-283677:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5be6667f097090011551b6e80bf165ef8b8393ba894fdd8185eba7e40ac44832",
	        "Created": "2025-11-15T10:33:34.248576658Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 369308,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:34:57.408433102Z",
	            "FinishedAt": "2025-11-15T10:34:55.946285352Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/5be6667f097090011551b6e80bf165ef8b8393ba894fdd8185eba7e40ac44832/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5be6667f097090011551b6e80bf165ef8b8393ba894fdd8185eba7e40ac44832/hostname",
	        "HostsPath": "/var/lib/docker/containers/5be6667f097090011551b6e80bf165ef8b8393ba894fdd8185eba7e40ac44832/hosts",
	        "LogPath": "/var/lib/docker/containers/5be6667f097090011551b6e80bf165ef8b8393ba894fdd8185eba7e40ac44832/5be6667f097090011551b6e80bf165ef8b8393ba894fdd8185eba7e40ac44832-json.log",
	        "Name": "/no-preload-283677",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-283677:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-283677",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5be6667f097090011551b6e80bf165ef8b8393ba894fdd8185eba7e40ac44832",
	                "LowerDir": "/var/lib/docker/overlay2/45ca12b29ace88526ba1031080c77398d7e970e8d9cdc89fe714afcbb97fdb8d-init/diff:/var/lib/docker/overlay2/507c85b6fb43d9c98216fac79d7a8d08dd20b2a63d0fbfc758336e5d38c04044/diff",
	                "MergedDir": "/var/lib/docker/overlay2/45ca12b29ace88526ba1031080c77398d7e970e8d9cdc89fe714afcbb97fdb8d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/45ca12b29ace88526ba1031080c77398d7e970e8d9cdc89fe714afcbb97fdb8d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/45ca12b29ace88526ba1031080c77398d7e970e8d9cdc89fe714afcbb97fdb8d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-283677",
	                "Source": "/var/lib/docker/volumes/no-preload-283677/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-283677",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-283677",
	                "name.minikube.sigs.k8s.io": "no-preload-283677",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8264dc33a3a532080f0ae6aff4eee6a056fb0dd7b5e521e3e96566788f2aa5ec",
	            "SandboxKey": "/var/run/docker/netns/8264dc33a3a5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-283677": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "31f43b80693175788eae574d1283c9772486f60a6f30b977a4f67f74c18220c7",
	                    "EndpointID": "a3ba68cfdc24803d972cef60a33a802c5d4b70629f0fcc30cdc85eae21f7a8d0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "3a:fc:15:1d:5e:6a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-283677",
	                        "5be6667f0970"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
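The docker inspect capture above records the kic container's run state and the 127.0.0.1 host ports Docker bound for SSH (22/tcp) and the API server (8443/tcp). The same fields can be read programmatically; a minimal sketch, assuming the github.com/docker/docker/client package and the container name from this report:

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	// Connect the same way the CLI does (DOCKER_HOST and friends from the env).
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// "no-preload-283677" is the minikube node container inspected above.
	info, err := cli.ContainerInspect(context.Background(), "no-preload-283677")
	if err != nil {
		panic(err)
	}

	// The fields the post-mortem cares about: run/pause state and the host
	// port bindings published on 127.0.0.1.
	fmt.Println("status:", info.State.Status, "paused:", info.State.Paused)
	for port, bindings := range info.NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
		}
	}
}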
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-283677 -n no-preload-283677
E1115 10:36:06.759677   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/auto-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-283677 -n no-preload-283677: exit status 2 (383.057124ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-283677 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-283677 logs -n 25: (1.31882186s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-931243 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ ssh     │ -p bridge-931243 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo containerd config dump                                                                                                                                                                                                  │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo crio config                                                                                                                                                                                                             │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ delete  │ -p bridge-931243                                                                                                                                                                                                                              │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ delete  │ -p disable-driver-mounts-435527                                                                                                                                                                                                               │ disable-driver-mounts-435527 │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ start   │ -p default-k8s-diff-port-026691 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p no-preload-283677 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ start   │ -p no-preload-283677 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable metrics-server -p embed-certs-719574 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ stop    │ -p embed-certs-719574 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ image   │ old-k8s-version-087235 image list --format=json                                                                                                                                                                                               │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ pause   │ -p old-k8s-version-087235 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ delete  │ -p old-k8s-version-087235                                                                                                                                                                                                                     │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-719574 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p embed-certs-719574 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ delete  │ -p old-k8s-version-087235                                                                                                                                                                                                                     │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p newest-cni-086099 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ image   │ no-preload-283677 image list --format=json                                                                                                                                                                                                    │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ pause   │ -p no-preload-283677 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:35:51
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:35:51.880635  378695 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:35:51.880972  378695 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:51.880985  378695 out.go:374] Setting ErrFile to fd 2...
	I1115 10:35:51.880990  378695 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:51.881260  378695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 10:35:51.881819  378695 out.go:368] Setting JSON to false
	I1115 10:35:51.883178  378695 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8289,"bootTime":1763194663,"procs":305,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:35:51.883287  378695 start.go:143] virtualization: kvm guest
	I1115 10:35:51.885121  378695 out.go:179] * [newest-cni-086099] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:35:51.886362  378695 notify.go:221] Checking for updates...
	I1115 10:35:51.886418  378695 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 10:35:51.887691  378695 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:35:51.888785  378695 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:35:51.889883  378695 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube
	I1115 10:35:51.891041  378695 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:35:51.895496  378695 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:35:51.897243  378695 config.go:182] Loaded profile config "default-k8s-diff-port-026691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:51.897400  378695 config.go:182] Loaded profile config "embed-certs-719574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:51.897562  378695 config.go:182] Loaded profile config "no-preload-283677": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:51.897686  378695 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:35:51.923206  378695 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:35:51.923309  378695 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:35:51.980066  378695 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:true NGoroutines:77 SystemTime:2025-11-15 10:35:51.97030866 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed
by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:35:51.980169  378695 docker.go:319] overlay module found
	I1115 10:35:51.982196  378695 out.go:179] * Using the docker driver based on user configuration
	I1115 10:35:51.983355  378695 start.go:309] selected driver: docker
	I1115 10:35:51.983369  378695 start.go:930] validating driver "docker" against <nil>
	I1115 10:35:51.983380  378695 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:35:51.984213  378695 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:35:52.044923  378695 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:76 SystemTime:2025-11-15 10:35:52.034876039 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:35:52.045179  378695 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1115 10:35:52.045216  378695 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1115 10:35:52.045457  378695 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1115 10:35:52.047189  378695 out.go:179] * Using Docker driver with root privileges
	I1115 10:35:52.048407  378695 cni.go:84] Creating CNI manager for ""
	I1115 10:35:52.048473  378695 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:35:52.048484  378695 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 10:35:52.048535  378695 start.go:353] cluster config:
	{Name:newest-cni-086099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:35:52.049826  378695 out.go:179] * Starting "newest-cni-086099" primary control-plane node in "newest-cni-086099" cluster
	I1115 10:35:52.050909  378695 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:35:52.052056  378695 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:35:52.053065  378695 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:35:52.053098  378695 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 10:35:52.053116  378695 cache.go:65] Caching tarball of preloaded images
	I1115 10:35:52.053151  378695 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:35:52.053229  378695 preload.go:238] Found /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 10:35:52.053246  378695 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:35:52.053398  378695 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/config.json ...
	I1115 10:35:52.053424  378695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/config.json: {Name:mkf8d02e5e19217377f4420029b0cc1adccada68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:52.074755  378695 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:35:52.074774  378695 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:35:52.074789  378695 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:35:52.074816  378695 start.go:360] acquireMachinesLock for newest-cni-086099: {Name:mk9065475199777f18a95aabcc9dbfda12f72647 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:35:52.074909  378695 start.go:364] duration metric: took 76.491µs to acquireMachinesLock for "newest-cni-086099"
	I1115 10:35:52.074932  378695 start.go:93] Provisioning new machine with config: &{Name:newest-cni-086099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:35:52.075027  378695 start.go:125] createHost starting for "" (driver="docker")
	W1115 10:35:48.630700  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:50.630784  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	I1115 10:35:51.131341  368849 pod_ready.go:94] pod "coredns-66bc5c9577-66nkj" is "Ready"
	I1115 10:35:51.131376  368849 pod_ready.go:86] duration metric: took 41.005975825s for pod "coredns-66bc5c9577-66nkj" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.134231  368849 pod_ready.go:83] waiting for pod "etcd-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.138317  368849 pod_ready.go:94] pod "etcd-no-preload-283677" is "Ready"
	I1115 10:35:51.138345  368849 pod_ready.go:86] duration metric: took 4.088368ms for pod "etcd-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.140317  368849 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.143990  368849 pod_ready.go:94] pod "kube-apiserver-no-preload-283677" is "Ready"
	I1115 10:35:51.144012  368849 pod_ready.go:86] duration metric: took 3.672536ms for pod "kube-apiserver-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.145780  368849 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.329880  368849 pod_ready.go:94] pod "kube-controller-manager-no-preload-283677" is "Ready"
	I1115 10:35:51.329907  368849 pod_ready.go:86] duration metric: took 184.110671ms for pod "kube-controller-manager-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.529891  368849 pod_ready.go:83] waiting for pod "kube-proxy-vjbxg" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.929529  368849 pod_ready.go:94] pod "kube-proxy-vjbxg" is "Ready"
	I1115 10:35:51.929559  368849 pod_ready.go:86] duration metric: took 399.636424ms for pod "kube-proxy-vjbxg" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 10:35:49.488114  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:35:51.988145  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	I1115 10:35:52.129598  368849 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:52.529568  368849 pod_ready.go:94] pod "kube-scheduler-no-preload-283677" is "Ready"
	I1115 10:35:52.529597  368849 pod_ready.go:86] duration metric: took 399.970584ms for pod "kube-scheduler-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:52.529608  368849 pod_ready.go:40] duration metric: took 42.409442772s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:35:52.581745  368849 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:35:52.583831  368849 out.go:179] * Done! kubectl is now configured to use "no-preload-283677" cluster and "default" namespace by default
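The pod_ready.go lines above poll each kube-system pod until its Ready condition turns true (or the pod is gone), recording a duration metric per pod. A minimal client-go sketch of that polling pattern follows; the helper name waitPodReady is made up and this is not minikube's implementation.

package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a pod until its Ready condition is True, mirroring the
// "waiting for pod ... to be Ready or be gone" loop in the log above.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // pod may not exist yet (or is gone); keep polling
			}
			if err != nil {
				return false, err
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}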
	I1115 10:35:49.830432  377744 out.go:252] * Restarting existing docker container for "embed-certs-719574" ...
	I1115 10:35:49.830517  377744 cli_runner.go:164] Run: docker start embed-certs-719574
	I1115 10:35:50.114791  377744 cli_runner.go:164] Run: docker container inspect embed-certs-719574 --format={{.State.Status}}
	I1115 10:35:50.134754  377744 kic.go:430] container "embed-certs-719574" state is running.
	I1115 10:35:50.135204  377744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-719574
	I1115 10:35:50.154606  377744 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/config.json ...
	I1115 10:35:50.154928  377744 machine.go:94] provisionDockerMachine start ...
	I1115 10:35:50.155043  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:50.174749  377744 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:50.175176  377744 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1115 10:35:50.175216  377744 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:35:50.176012  377744 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38764->127.0.0.1:33119: read: connection reset by peer
	I1115 10:35:53.310173  377744 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-719574
	
	I1115 10:35:53.310214  377744 ubuntu.go:182] provisioning hostname "embed-certs-719574"
	I1115 10:35:53.310354  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:53.329392  377744 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:53.329615  377744 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1115 10:35:53.329634  377744 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-719574 && echo "embed-certs-719574" | sudo tee /etc/hostname
	I1115 10:35:53.472294  377744 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-719574
	
	I1115 10:35:53.472411  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:53.492862  377744 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:53.493213  377744 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1115 10:35:53.493264  377744 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-719574' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-719574/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-719574' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:35:53.625059  377744 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:35:53.625092  377744 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-55448/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-55448/.minikube}
	I1115 10:35:53.625126  377744 ubuntu.go:190] setting up certificates
	I1115 10:35:53.625143  377744 provision.go:84] configureAuth start
	I1115 10:35:53.625244  377744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-719574
	I1115 10:35:53.644516  377744 provision.go:143] copyHostCerts
	I1115 10:35:53.644586  377744 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem, removing ...
	I1115 10:35:53.644598  377744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem
	I1115 10:35:53.644672  377744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem (1082 bytes)
	I1115 10:35:53.644781  377744 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem, removing ...
	I1115 10:35:53.644790  377744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem
	I1115 10:35:53.644816  377744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem (1123 bytes)
	I1115 10:35:53.644891  377744 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem, removing ...
	I1115 10:35:53.644898  377744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem
	I1115 10:35:53.644921  377744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem (1679 bytes)
	I1115 10:35:53.645022  377744 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem org=jenkins.embed-certs-719574 san=[127.0.0.1 192.168.94.2 embed-certs-719574 localhost minikube]
	I1115 10:35:53.893496  377744 provision.go:177] copyRemoteCerts
	I1115 10:35:53.893597  377744 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:35:53.893653  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:53.913597  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:54.011809  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:35:54.029841  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1115 10:35:54.048781  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 10:35:54.067015  377744 provision.go:87] duration metric: took 441.854991ms to configureAuth
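configureAuth above (copyHostCerts, then "generating server cert") issues a server certificate whose SANs cover the node IP, hostname, localhost and the minikube name before it is copied to /etc/docker. A minimal crypto/x509 sketch of issuing such a SAN certificate from a CA key pair, reusing the SAN list reported for embed-certs-719574 (illustrative only; error handling elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA used to sign the server cert (stand-in for ca.pem / ca-key.pem above).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs the log reports:
	// san=[127.0.0.1 192.168.94.2 embed-certs-719574 localhost minikube]
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "embed-certs-719574"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		DNSNames:     []string{"embed-certs-719574", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Emit the PEM the same way server.pem is written before being scp'd over.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}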
	I1115 10:35:54.067059  377744 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:35:54.067256  377744 config.go:182] Loaded profile config "embed-certs-719574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:54.067376  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:54.087249  377744 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:54.087454  377744 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1115 10:35:54.087469  377744 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:35:54.383177  377744 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:35:54.383205  377744 machine.go:97] duration metric: took 4.228252503s to provisionDockerMachine
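provisionDockerMachine finishes by writing /etc/sysconfig/crio.minikube and restarting crio over the SSH endpoint that Docker publishes on 127.0.0.1 (port 33119 for this profile, per the sshutil lines nearby). A minimal golang.org/x/crypto/ssh sketch of running such a remote command; the key path, port and command are taken from the log, everything else is a hypothetical stand-in for minikube's ssh_runner:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local test container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33119", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// Same shape as the command in the log: drop a sysconfig file, restart crio.
	out, err := sess.CombinedOutput(`sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`)
	fmt.Println(string(out), err)
}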
	I1115 10:35:54.383221  377744 start.go:293] postStartSetup for "embed-certs-719574" (driver="docker")
	I1115 10:35:54.383246  377744 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:35:54.383323  377744 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:35:54.383389  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:54.402613  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:54.497991  377744 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:35:54.501812  377744 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:35:54.501845  377744 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:35:54.501859  377744 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/addons for local assets ...
	I1115 10:35:54.501927  377744 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/files for local assets ...
	I1115 10:35:54.502073  377744 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem -> 589622.pem in /etc/ssl/certs
	I1115 10:35:54.502192  377744 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:35:54.510401  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:35:54.528845  377744 start.go:296] duration metric: took 145.608503ms for postStartSetup
	I1115 10:35:54.528929  377744 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:35:54.529033  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:54.548704  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:52.076936  378695 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:35:52.077138  378695 start.go:159] libmachine.API.Create for "newest-cni-086099" (driver="docker")
	I1115 10:35:52.077166  378695 client.go:173] LocalClient.Create starting
	I1115 10:35:52.077242  378695 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem
	I1115 10:35:52.077273  378695 main.go:143] libmachine: Decoding PEM data...
	I1115 10:35:52.077289  378695 main.go:143] libmachine: Parsing certificate...
	I1115 10:35:52.077346  378695 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem
	I1115 10:35:52.077364  378695 main.go:143] libmachine: Decoding PEM data...
	I1115 10:35:52.077373  378695 main.go:143] libmachine: Parsing certificate...
	I1115 10:35:52.077693  378695 cli_runner.go:164] Run: docker network inspect newest-cni-086099 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:35:52.094513  378695 cli_runner.go:211] docker network inspect newest-cni-086099 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:35:52.094577  378695 network_create.go:284] running [docker network inspect newest-cni-086099] to gather additional debugging logs...
	I1115 10:35:52.094597  378695 cli_runner.go:164] Run: docker network inspect newest-cni-086099
	W1115 10:35:52.112168  378695 cli_runner.go:211] docker network inspect newest-cni-086099 returned with exit code 1
	I1115 10:35:52.112212  378695 network_create.go:287] error running [docker network inspect newest-cni-086099]: docker network inspect newest-cni-086099: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-086099 not found
	I1115 10:35:52.112227  378695 network_create.go:289] output of [docker network inspect newest-cni-086099]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-086099 not found
	
	** /stderr **
	I1115 10:35:52.112312  378695 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:35:52.130531  378695 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-77644897380e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:f9:e2:83:c3:91} reservation:<nil>}
	I1115 10:35:52.131072  378695 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e808d03b2052 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:03:24:ef:92:a1} reservation:<nil>}
	I1115 10:35:52.131784  378695 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3fb45eeaa66e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1a:b3:b7:0f:2e:99} reservation:<nil>}
	I1115 10:35:52.132406  378695 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-31f43b806931 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0a:cc:8c:d8:0d:c5} reservation:<nil>}
	I1115 10:35:52.133098  378695 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-a057ad05bea0 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:4e:4d:10:e4:db:cb} reservation:<nil>}
	I1115 10:35:52.133911  378695 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-5402d8c1e78a IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:0a:f0:66:0a:22:a5} reservation:<nil>}
	I1115 10:35:52.134802  378695 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f61840}
	I1115 10:35:52.134825  378695 network_create.go:124] attempt to create docker network newest-cni-086099 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1115 10:35:52.134865  378695 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-086099 newest-cni-086099
	I1115 10:35:52.184306  378695 network_create.go:108] docker network newest-cni-086099 192.168.103.0/24 created
	I1115 10:35:52.184341  378695 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-086099" container
	I1115 10:35:52.184418  378695 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:35:52.204038  378695 cli_runner.go:164] Run: docker volume create newest-cni-086099 --label name.minikube.sigs.k8s.io=newest-cni-086099 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:35:52.223076  378695 oci.go:103] Successfully created a docker volume newest-cni-086099
	I1115 10:35:52.223154  378695 cli_runner.go:164] Run: docker run --rm --name newest-cni-086099-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-086099 --entrypoint /usr/bin/test -v newest-cni-086099:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:35:52.620626  378695 oci.go:107] Successfully prepared a docker volume newest-cni-086099
	I1115 10:35:52.620689  378695 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:35:52.620707  378695 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:35:52.620778  378695 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-086099:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1115 10:35:54.641677  377744 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:35:54.646490  377744 fix.go:56] duration metric: took 4.836578375s for fixHost
	I1115 10:35:54.646531  377744 start.go:83] releasing machines lock for "embed-certs-719574", held for 4.836643994s
	I1115 10:35:54.646605  377744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-719574
	I1115 10:35:54.665925  377744 ssh_runner.go:195] Run: cat /version.json
	I1115 10:35:54.666009  377744 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:35:54.666054  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:54.666061  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:54.685752  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:54.686933  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:54.832262  377744 ssh_runner.go:195] Run: systemctl --version
	I1115 10:35:54.839294  377744 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:35:54.881869  377744 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:35:54.887543  377744 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:35:54.887616  377744 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:35:54.897470  377744 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:35:54.897495  377744 start.go:496] detecting cgroup driver to use...
	I1115 10:35:54.897526  377744 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:35:54.897575  377744 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:35:54.915183  377744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:35:54.936918  377744 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:35:54.937042  377744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:35:54.959514  377744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:35:54.974364  377744 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:35:55.064629  377744 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:35:55.149431  377744 docker.go:234] disabling docker service ...
	I1115 10:35:55.149491  377744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:35:55.164826  377744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:35:55.178539  377744 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:35:55.258146  377744 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:35:55.336854  377744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:35:55.350099  377744 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:35:55.371361  377744 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:35:55.371428  377744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:55.392170  377744 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:35:55.392226  377744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:55.402091  377744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:55.464259  377744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:55.527554  377744 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:35:55.536601  377744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:55.581816  377744 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:55.591398  377744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:55.656666  377744 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:35:55.665181  377744 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:35:55.673411  377744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:55.753200  377744 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:35:57.278236  377744 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.524976792s)
	I1115 10:35:57.278272  377744 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:35:57.278324  377744 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:35:57.282657  377744 start.go:564] Will wait 60s for crictl version
	I1115 10:35:57.282733  377744 ssh_runner.go:195] Run: which crictl
	I1115 10:35:57.286574  377744 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:35:57.314817  377744 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:35:57.314911  377744 ssh_runner.go:195] Run: crio --version
	I1115 10:35:57.343990  377744 ssh_runner.go:195] Run: crio --version
	I1115 10:35:57.373426  377744 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1115 10:35:54.488332  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:35:56.987904  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	I1115 10:35:57.378513  377744 cli_runner.go:164] Run: docker network inspect embed-certs-719574 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:35:57.402028  377744 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1115 10:35:57.409345  377744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:35:57.420512  377744 kubeadm.go:884] updating cluster {Name:embed-certs-719574 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-719574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:35:57.420680  377744 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:35:57.420740  377744 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:35:57.458228  377744 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:35:57.458259  377744 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:35:57.458316  377744 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:35:57.485027  377744 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:35:57.485050  377744 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:35:57.485058  377744 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1115 10:35:57.485169  377744 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-719574 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-719574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:35:57.485252  377744 ssh_runner.go:195] Run: crio config
	I1115 10:35:57.536095  377744 cni.go:84] Creating CNI manager for ""
	I1115 10:35:57.536127  377744 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:35:57.536147  377744 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:35:57.536177  377744 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-719574 NodeName:embed-certs-719574 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:35:57.536329  377744 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-719574"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:35:57.536407  377744 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:35:57.544702  377744 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:35:57.544775  377744 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:35:57.554019  377744 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1115 10:35:57.569040  377744 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:35:57.585285  377744 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1115 10:35:57.600345  377744 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:35:57.604627  377744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:35:57.619569  377744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:57.710162  377744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:35:57.731269  377744 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574 for IP: 192.168.94.2
	I1115 10:35:57.731297  377744 certs.go:195] generating shared ca certs ...
	I1115 10:35:57.731319  377744 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:57.731508  377744 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 10:35:57.731564  377744 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 10:35:57.731581  377744 certs.go:257] generating profile certs ...
	I1115 10:35:57.731700  377744 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/client.key
	I1115 10:35:57.731784  377744 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/apiserver.key.788254b7
	I1115 10:35:57.731906  377744 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/proxy-client.key
	I1115 10:35:57.732110  377744 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:35:57.732161  377744 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:35:57.732182  377744 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:35:57.732220  377744 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:35:57.732263  377744 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:35:57.732297  377744 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:35:57.732354  377744 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:35:57.733199  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:35:57.753928  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:35:57.776212  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:35:57.798569  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:35:57.855574  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1115 10:35:57.881192  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:35:57.958309  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:35:57.978725  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:35:58.001721  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:35:58.020846  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:35:58.039367  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:35:58.064830  377744 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:35:58.080795  377744 ssh_runner.go:195] Run: openssl version
	I1115 10:35:58.087121  377744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:35:58.095754  377744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:58.099496  377744 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:58.099554  377744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:58.135273  377744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:35:58.145763  377744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:35:58.156943  377744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:35:58.161920  377744 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:35:58.162041  377744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:35:58.206129  377744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:35:58.214420  377744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:35:58.223061  377744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:35:58.226827  377744 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:35:58.226872  377744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:35:58.268503  377744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:35:58.278233  377744 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:35:58.282629  377744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:35:58.349655  377744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:35:58.454042  377744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:35:58.576363  377744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:35:58.746644  377744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:35:58.782106  377744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1115 10:35:58.871080  377744 kubeadm.go:401] StartCluster: {Name:embed-certs-719574 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-719574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:35:58.871213  377744 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:35:58.871280  377744 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:35:58.960244  377744 cri.go:89] found id: "34a183c86eaa1af613ae4708887513840319366cbb4179e7f1678698297eade2"
	I1115 10:35:58.960271  377744 cri.go:89] found id: "a04037d02e2f108f2bc0bc86b223e37c201372e3009a0b55923609e9c3f5b7b5"
	I1115 10:35:58.960278  377744 cri.go:89] found id: "d2523d5b7384ac5c1a3b025b86aa00148daa68b776dfada0c3e9b0dd751d444c"
	I1115 10:35:58.960283  377744 cri.go:89] found id: "56627175c47b813bf4460ca13a901ec3152cbb4d22f0362e40133db8b19b3e87"
	I1115 10:35:58.960298  377744 cri.go:89] found id: ""
	I1115 10:35:58.960336  377744 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:35:58.974645  377744 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:35:58Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:35:58.974767  377744 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:35:59.046786  377744 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:35:59.046808  377744 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:35:59.046859  377744 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:35:59.056636  377744 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:35:59.057549  377744 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-719574" does not appear in /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:35:59.058047  377744 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-55448/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-719574" cluster setting kubeconfig missing "embed-certs-719574" context setting]
	I1115 10:35:59.058858  377744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:59.060778  377744 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:35:59.069779  377744 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1115 10:35:59.069815  377744 kubeadm.go:602] duration metric: took 22.998235ms to restartPrimaryControlPlane
	I1115 10:35:59.069826  377744 kubeadm.go:403] duration metric: took 198.758279ms to StartCluster
	I1115 10:35:59.069846  377744 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:59.069922  377744 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:35:59.071492  377744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:59.071756  377744 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:35:59.071888  377744 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:35:59.072018  377744 config.go:182] Loaded profile config "embed-certs-719574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:59.072030  377744 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-719574"
	I1115 10:35:59.072050  377744 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-719574"
	W1115 10:35:59.072059  377744 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:35:59.072081  377744 addons.go:70] Setting dashboard=true in profile "embed-certs-719574"
	I1115 10:35:59.072126  377744 addons.go:239] Setting addon dashboard=true in "embed-certs-719574"
	W1115 10:35:59.072141  377744 addons.go:248] addon dashboard should already be in state true
	I1115 10:35:59.072091  377744 host.go:66] Checking if "embed-certs-719574" exists ...
	I1115 10:35:59.072176  377744 host.go:66] Checking if "embed-certs-719574" exists ...
	I1115 10:35:59.072082  377744 addons.go:70] Setting default-storageclass=true in profile "embed-certs-719574"
	I1115 10:35:59.072227  377744 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-719574"
	I1115 10:35:59.072560  377744 cli_runner.go:164] Run: docker container inspect embed-certs-719574 --format={{.State.Status}}
	I1115 10:35:59.072736  377744 cli_runner.go:164] Run: docker container inspect embed-certs-719574 --format={{.State.Status}}
	I1115 10:35:59.072775  377744 cli_runner.go:164] Run: docker container inspect embed-certs-719574 --format={{.State.Status}}
	I1115 10:35:59.073400  377744 out.go:179] * Verifying Kubernetes components...
	I1115 10:35:59.074646  377744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:59.097674  377744 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 10:35:59.097741  377744 addons.go:239] Setting addon default-storageclass=true in "embed-certs-719574"
	W1115 10:35:59.097755  377744 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:35:59.097682  377744 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:35:59.097790  377744 host.go:66] Checking if "embed-certs-719574" exists ...
	I1115 10:35:59.098261  377744 cli_runner.go:164] Run: docker container inspect embed-certs-719574 --format={{.State.Status}}
	I1115 10:35:59.098922  377744 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:59.098988  377744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:35:59.099040  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:59.103435  377744 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 10:35:59.104647  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 10:35:59.104679  377744 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 10:35:59.104749  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:59.119302  377744 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:59.119331  377744 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:35:59.119398  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:59.120171  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:59.125098  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:59.137515  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:59.461029  377744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:59.461402  377744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:59.465397  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 10:35:59.465421  377744 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 10:35:59.550018  377744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:35:59.557165  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 10:35:59.557200  377744 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 10:35:57.180648  378695 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-086099:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.559815955s)
	I1115 10:35:57.180688  378695 kic.go:203] duration metric: took 4.559978988s to extract preloaded images to volume ...
	W1115 10:35:57.180808  378695 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 10:35:57.180907  378695 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 10:35:57.245170  378695 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-086099 --name newest-cni-086099 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-086099 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-086099 --network newest-cni-086099 --ip 192.168.103.2 --volume newest-cni-086099:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 10:35:57.553341  378695 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Running}}
	I1115 10:35:57.574001  378695 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:35:57.595723  378695 cli_runner.go:164] Run: docker exec newest-cni-086099 stat /var/lib/dpkg/alternatives/iptables
	I1115 10:35:57.648675  378695 oci.go:144] the created container "newest-cni-086099" has a running status.
	I1115 10:35:57.648711  378695 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa...
	I1115 10:35:57.758503  378695 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 10:35:57.788103  378695 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:35:57.813502  378695 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 10:35:57.813525  378695 kic_runner.go:114] Args: [docker exec --privileged newest-cni-086099 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 10:35:57.866879  378695 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:35:57.892578  378695 machine.go:94] provisionDockerMachine start ...
	I1115 10:35:57.892683  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:35:57.916142  378695 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:57.916445  378695 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1115 10:35:57.916463  378695 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:35:57.917246  378695 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47936->127.0.0.1:33124: read: connection reset by peer
	I1115 10:36:01.055800  378695 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-086099
	
	I1115 10:36:01.055829  378695 ubuntu.go:182] provisioning hostname "newest-cni-086099"
	I1115 10:36:01.055909  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:01.077686  378695 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:01.078023  378695 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1115 10:36:01.078042  378695 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-086099 && echo "newest-cni-086099" | sudo tee /etc/hostname
	I1115 10:36:01.223717  378695 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-086099
	
	I1115 10:36:01.223807  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:01.242452  378695 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:01.242668  378695 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1115 10:36:01.242685  378695 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-086099' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-086099/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-086099' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:36:01.376856  378695 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:36:01.376893  378695 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-55448/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-55448/.minikube}
	I1115 10:36:01.376932  378695 ubuntu.go:190] setting up certificates
	I1115 10:36:01.376976  378695 provision.go:84] configureAuth start
	I1115 10:36:01.377048  378695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-086099
	I1115 10:36:01.398840  378695 provision.go:143] copyHostCerts
	I1115 10:36:01.398983  378695 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem, removing ...
	I1115 10:36:01.399002  378695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem
	I1115 10:36:01.399077  378695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem (1082 bytes)
	I1115 10:36:01.399173  378695 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem, removing ...
	I1115 10:36:01.399183  378695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem
	I1115 10:36:01.399217  378695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem (1123 bytes)
	I1115 10:36:01.399290  378695 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem, removing ...
	I1115 10:36:01.399300  378695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem
	I1115 10:36:01.399336  378695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem (1679 bytes)
	I1115 10:36:01.399416  378695 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem org=jenkins.newest-cni-086099 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-086099]
	I1115 10:36:01.599358  378695 provision.go:177] copyRemoteCerts
	I1115 10:36:01.599429  378695 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:36:01.599467  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:01.617920  378695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:01.714257  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 10:36:01.736832  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 10:36:01.771414  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:36:01.789744  378695 provision.go:87] duration metric: took 412.746889ms to configureAuth
	I1115 10:36:01.789780  378695 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:36:01.790004  378695 config.go:182] Loaded profile config "newest-cni-086099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:01.790111  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:01.807644  378695 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:01.807895  378695 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1115 10:36:01.807913  378695 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1115 10:35:59.487887  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:36:01.488245  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	I1115 10:36:01.988676  367608 node_ready.go:49] node "default-k8s-diff-port-026691" is "Ready"
	I1115 10:36:01.988712  367608 node_ready.go:38] duration metric: took 40.004362414s for node "default-k8s-diff-port-026691" to be "Ready" ...
	I1115 10:36:01.988728  367608 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:36:01.988785  367608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:36:02.002727  367608 api_server.go:72] duration metric: took 41.048135621s to wait for apiserver process to appear ...
	I1115 10:36:02.002761  367608 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:36:02.002786  367608 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1115 10:36:02.007061  367608 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1115 10:36:02.008035  367608 api_server.go:141] control plane version: v1.34.1
	I1115 10:36:02.008064  367608 api_server.go:131] duration metric: took 5.294787ms to wait for apiserver health ...
	I1115 10:36:02.008076  367608 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:36:02.011683  367608 system_pods.go:59] 8 kube-system pods found
	I1115 10:36:02.011713  367608 system_pods.go:61] "coredns-66bc5c9577-5q2j4" [e6c4ca54-e0fe-45ee-88a6-33bdccbb876c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:02.011719  367608 system_pods.go:61] "etcd-default-k8s-diff-port-026691" [f8228efa-91a3-41fb-9952-3b4063ddf162] Running
	I1115 10:36:02.011725  367608 system_pods.go:61] "kindnet-hjdrk" [9e1f7579-f5a2-44cd-b77f-71219cd8827d] Running
	I1115 10:36:02.011729  367608 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-026691" [9da86b3b-bfcd-4c40-ac86-ee9d9faeb9b0] Running
	I1115 10:36:02.011732  367608 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-026691" [23d9be87-1de1-4c38-b65e-b084cf0fed25] Running
	I1115 10:36:02.011737  367608 system_pods.go:61] "kube-proxy-c5bw5" [ee48d34b-ae60-4a03-a7bd-df76e089eebb] Running
	I1115 10:36:02.011741  367608 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-026691" [52b3711f-015c-4af6-8672-58642cbc0c16] Running
	I1115 10:36:02.011747  367608 system_pods.go:61] "storage-provisioner" [7dedf7a9-415d-4260-b225-7ca171744768] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:36:02.011757  367608 system_pods.go:74] duration metric: took 3.675183ms to wait for pod list to return data ...
	I1115 10:36:02.011767  367608 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:36:02.014095  367608 default_sa.go:45] found service account: "default"
	I1115 10:36:02.014113  367608 default_sa.go:55] duration metric: took 2.338136ms for default service account to be created ...
	I1115 10:36:02.014121  367608 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:36:02.016619  367608 system_pods.go:86] 8 kube-system pods found
	I1115 10:36:02.016644  367608 system_pods.go:89] "coredns-66bc5c9577-5q2j4" [e6c4ca54-e0fe-45ee-88a6-33bdccbb876c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:02.016650  367608 system_pods.go:89] "etcd-default-k8s-diff-port-026691" [f8228efa-91a3-41fb-9952-3b4063ddf162] Running
	I1115 10:36:02.016657  367608 system_pods.go:89] "kindnet-hjdrk" [9e1f7579-f5a2-44cd-b77f-71219cd8827d] Running
	I1115 10:36:02.016663  367608 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-026691" [9da86b3b-bfcd-4c40-ac86-ee9d9faeb9b0] Running
	I1115 10:36:02.016668  367608 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-026691" [23d9be87-1de1-4c38-b65e-b084cf0fed25] Running
	I1115 10:36:02.016676  367608 system_pods.go:89] "kube-proxy-c5bw5" [ee48d34b-ae60-4a03-a7bd-df76e089eebb] Running
	I1115 10:36:02.016681  367608 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-026691" [52b3711f-015c-4af6-8672-58642cbc0c16] Running
	I1115 10:36:02.016692  367608 system_pods.go:89] "storage-provisioner" [7dedf7a9-415d-4260-b225-7ca171744768] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:36:02.016714  367608 retry.go:31] will retry after 218.810216ms: missing components: kube-dns
	I1115 10:36:02.239606  367608 system_pods.go:86] 8 kube-system pods found
	I1115 10:36:02.239636  367608 system_pods.go:89] "coredns-66bc5c9577-5q2j4" [e6c4ca54-e0fe-45ee-88a6-33bdccbb876c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:02.239642  367608 system_pods.go:89] "etcd-default-k8s-diff-port-026691" [f8228efa-91a3-41fb-9952-3b4063ddf162] Running
	I1115 10:36:02.239648  367608 system_pods.go:89] "kindnet-hjdrk" [9e1f7579-f5a2-44cd-b77f-71219cd8827d] Running
	I1115 10:36:02.239654  367608 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-026691" [9da86b3b-bfcd-4c40-ac86-ee9d9faeb9b0] Running
	I1115 10:36:02.239657  367608 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-026691" [23d9be87-1de1-4c38-b65e-b084cf0fed25] Running
	I1115 10:36:02.239661  367608 system_pods.go:89] "kube-proxy-c5bw5" [ee48d34b-ae60-4a03-a7bd-df76e089eebb] Running
	I1115 10:36:02.239665  367608 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-026691" [52b3711f-015c-4af6-8672-58642cbc0c16] Running
	I1115 10:36:02.239671  367608 system_pods.go:89] "storage-provisioner" [7dedf7a9-415d-4260-b225-7ca171744768] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:36:02.239689  367608 retry.go:31] will retry after 377.391978ms: missing components: kube-dns
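For reference, the retry loop above (listing kube-system pods until kube-dns is present and running, with short back-off waits) can be approximated with client-go. This is an illustrative sketch only, not minikube's actual wait implementation; the kubeconfig path is an assumption for the example.

// Illustrative sketch (not minikube's implementation): poll kube-system
// until every pod labelled k8s-app=kube-dns reports phase Running,
// mirroring the "missing components: kube-dns" retries in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is assumed for this example.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kube-dns",
			})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // not there yet (or transient API error): keep retrying
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-dns is running")
}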
	I1115 10:35:59.653179  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 10:35:59.653211  377744 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 10:35:59.670277  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 10:35:59.670303  377744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 10:35:59.757741  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 10:35:59.757796  377744 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 10:35:59.771666  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 10:35:59.771696  377744 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 10:35:59.844282  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 10:35:59.844312  377744 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 10:35:59.859695  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 10:35:59.859723  377744 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 10:35:59.873202  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:35:59.873227  377744 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 10:35:59.887124  377744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:36:03.675772  377744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.21470192s)
	I1115 10:36:03.675861  377744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.214437385s)
	I1115 10:36:03.675941  377744 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.125882332s)
	I1115 10:36:03.676037  377744 node_ready.go:35] waiting up to 6m0s for node "embed-certs-719574" to be "Ready" ...
	I1115 10:36:03.676084  377744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.788916637s)
	I1115 10:36:03.677758  377744 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-719574 addons enable metrics-server
	
	I1115 10:36:03.686848  377744 node_ready.go:49] node "embed-certs-719574" is "Ready"
	I1115 10:36:03.686872  377744 node_ready.go:38] duration metric: took 10.779527ms for node "embed-certs-719574" to be "Ready" ...
	I1115 10:36:03.686888  377744 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:36:03.686937  377744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:36:03.688770  377744 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1115 10:36:02.108071  378695 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:36:02.108099  378695 machine.go:97] duration metric: took 4.215497724s to provisionDockerMachine
	I1115 10:36:02.108110  378695 client.go:176] duration metric: took 10.030938427s to LocalClient.Create
	I1115 10:36:02.108130  378695 start.go:167] duration metric: took 10.030994703s to libmachine.API.Create "newest-cni-086099"
	I1115 10:36:02.108137  378695 start.go:293] postStartSetup for "newest-cni-086099" (driver="docker")
	I1115 10:36:02.108146  378695 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:36:02.108214  378695 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:36:02.108252  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:02.126898  378695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:02.234226  378695 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:36:02.237991  378695 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:36:02.238025  378695 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:36:02.238037  378695 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/addons for local assets ...
	I1115 10:36:02.238104  378695 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/files for local assets ...
	I1115 10:36:02.238204  378695 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem -> 589622.pem in /etc/ssl/certs
	I1115 10:36:02.238321  378695 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:36:02.249461  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:02.279024  378695 start.go:296] duration metric: took 170.869278ms for postStartSetup
	I1115 10:36:02.279408  378695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-086099
	I1115 10:36:02.299580  378695 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/config.json ...
	I1115 10:36:02.299869  378695 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:36:02.299927  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:02.318249  378695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:02.419697  378695 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:36:02.424780  378695 start.go:128] duration metric: took 10.349732709s to createHost
	I1115 10:36:02.424816  378695 start.go:83] releasing machines lock for "newest-cni-086099", held for 10.349888861s
	I1115 10:36:02.424894  378695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-086099
	I1115 10:36:02.442707  378695 ssh_runner.go:195] Run: cat /version.json
	I1115 10:36:02.442769  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:02.442774  378695 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:36:02.442838  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:02.475405  378695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:02.476482  378695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:02.627684  378695 ssh_runner.go:195] Run: systemctl --version
	I1115 10:36:02.635318  378695 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:36:02.690380  378695 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:36:02.695343  378695 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:36:02.695404  378695 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:36:02.723025  378695 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1115 10:36:02.723047  378695 start.go:496] detecting cgroup driver to use...
	I1115 10:36:02.723077  378695 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:36:02.723116  378695 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:36:02.740027  378695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:36:02.757082  378695 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:36:02.757147  378695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:36:02.780790  378695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:36:02.800005  378695 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:36:02.903918  378695 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:36:03.008676  378695 docker.go:234] disabling docker service ...
	I1115 10:36:03.008735  378695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:36:03.029417  378695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:36:03.042351  378695 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:36:03.141887  378695 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:36:03.242543  378695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:36:03.261558  378695 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:36:03.281222  378695 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:36:03.281289  378695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:03.292850  378695 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:36:03.292913  378695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:03.302308  378695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:03.312080  378695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:03.321520  378695 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:36:03.330371  378695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:03.339342  378695 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:03.358403  378695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:03.370875  378695 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:36:03.382720  378695 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:36:03.392373  378695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:03.490238  378695 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:36:03.612676  378695 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:36:03.612751  378695 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:36:03.616844  378695 start.go:564] Will wait 60s for crictl version
	I1115 10:36:03.616906  378695 ssh_runner.go:195] Run: which crictl
	I1115 10:36:03.620519  378695 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:36:03.647994  378695 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:36:03.648098  378695 ssh_runner.go:195] Run: crio --version
	I1115 10:36:03.681466  378695 ssh_runner.go:195] Run: crio --version
	I1115 10:36:03.715909  378695 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:36:03.717677  378695 cli_runner.go:164] Run: docker network inspect newest-cni-086099 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:36:03.737236  378695 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1115 10:36:03.741562  378695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:03.754243  378695 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
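The /etc/hosts rewrite logged just above (strip any stale host.minikube.internal line, append the fresh mapping, copy the file back) is a simple idempotent update. A minimal Go sketch of the same pattern, with the path and entry taken from the log and error handling simplified for illustration:

// Minimal sketch of the idempotent hosts-file update shown above; not
// minikube's code. Drops any existing "host.minikube.internal" mapping,
// then appends the current one.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.103.1\thost.minikube.internal" // value from the log

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // remove the stale mapping
		}
		kept = append(kept, line)
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
	if err := os.WriteFile("/etc/hosts", []byte(out), 0644); err != nil {
		panic(err)
	}
	fmt.Println("updated /etc/hosts")
}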
	I1115 10:36:02.621370  367608 system_pods.go:86] 8 kube-system pods found
	I1115 10:36:02.621401  367608 system_pods.go:89] "coredns-66bc5c9577-5q2j4" [e6c4ca54-e0fe-45ee-88a6-33bdccbb876c] Running
	I1115 10:36:02.621407  367608 system_pods.go:89] "etcd-default-k8s-diff-port-026691" [f8228efa-91a3-41fb-9952-3b4063ddf162] Running
	I1115 10:36:02.621412  367608 system_pods.go:89] "kindnet-hjdrk" [9e1f7579-f5a2-44cd-b77f-71219cd8827d] Running
	I1115 10:36:02.621416  367608 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-026691" [9da86b3b-bfcd-4c40-ac86-ee9d9faeb9b0] Running
	I1115 10:36:02.621421  367608 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-026691" [23d9be87-1de1-4c38-b65e-b084cf0fed25] Running
	I1115 10:36:02.621424  367608 system_pods.go:89] "kube-proxy-c5bw5" [ee48d34b-ae60-4a03-a7bd-df76e089eebb] Running
	I1115 10:36:02.621428  367608 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-026691" [52b3711f-015c-4af6-8672-58642cbc0c16] Running
	I1115 10:36:02.621431  367608 system_pods.go:89] "storage-provisioner" [7dedf7a9-415d-4260-b225-7ca171744768] Running
	I1115 10:36:02.621439  367608 system_pods.go:126] duration metric: took 607.311685ms to wait for k8s-apps to be running ...
	I1115 10:36:02.621445  367608 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:36:02.621494  367608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:36:02.636245  367608 system_svc.go:56] duration metric: took 14.790396ms WaitForService to wait for kubelet
	I1115 10:36:02.636277  367608 kubeadm.go:587] duration metric: took 41.681692299s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:36:02.636317  367608 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:36:02.639743  367608 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:36:02.639770  367608 node_conditions.go:123] node cpu capacity is 8
	I1115 10:36:02.639786  367608 node_conditions.go:105] duration metric: took 3.46192ms to run NodePressure ...
	I1115 10:36:02.639802  367608 start.go:242] waiting for startup goroutines ...
	I1115 10:36:02.639815  367608 start.go:247] waiting for cluster config update ...
	I1115 10:36:02.639834  367608 start.go:256] writing updated cluster config ...
	I1115 10:36:02.640167  367608 ssh_runner.go:195] Run: rm -f paused
	I1115 10:36:02.644506  367608 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:36:02.649994  367608 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5q2j4" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:02.656679  367608 pod_ready.go:94] pod "coredns-66bc5c9577-5q2j4" is "Ready"
	I1115 10:36:02.656844  367608 pod_ready.go:86] duration metric: took 6.756741ms for pod "coredns-66bc5c9577-5q2j4" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:02.659798  367608 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:02.665415  367608 pod_ready.go:94] pod "etcd-default-k8s-diff-port-026691" is "Ready"
	I1115 10:36:02.665516  367608 pod_ready.go:86] duration metric: took 5.656754ms for pod "etcd-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:02.669115  367608 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:02.675621  367608 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-026691" is "Ready"
	I1115 10:36:02.675649  367608 pod_ready.go:86] duration metric: took 6.472611ms for pod "kube-apiserver-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:02.678236  367608 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:03.050408  367608 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-026691" is "Ready"
	I1115 10:36:03.050447  367608 pod_ready.go:86] duration metric: took 372.139168ms for pod "kube-controller-manager-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:03.250079  367608 pod_ready.go:83] waiting for pod "kube-proxy-c5bw5" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:03.649856  367608 pod_ready.go:94] pod "kube-proxy-c5bw5" is "Ready"
	I1115 10:36:03.649889  367608 pod_ready.go:86] duration metric: took 399.777083ms for pod "kube-proxy-c5bw5" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:03.850318  367608 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:04.249888  367608 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-026691" is "Ready"
	I1115 10:36:04.249914  367608 pod_ready.go:86] duration metric: took 399.564892ms for pod "kube-scheduler-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:04.249926  367608 pod_ready.go:40] duration metric: took 1.605379763s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:36:04.304218  367608 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:36:04.306183  367608 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-026691" cluster and "default" namespace by default
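The apiserver healthz wait seen in this run (GET https://<apiserver>:<port>/healthz, expecting HTTP 200 with body "ok") is a plain HTTPS probe. A rough sketch under that assumption, skipping TLS verification only for illustration (minikube trusts the cluster CA in practice):

// Rough sketch of the healthz probe logged above; endpoint taken from the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.85.2:8444/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
}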
	I1115 10:36:03.689851  377744 addons.go:515] duration metric: took 4.61797682s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1115 10:36:03.700992  377744 api_server.go:72] duration metric: took 4.62919911s to wait for apiserver process to appear ...
	I1115 10:36:03.701014  377744 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:36:03.701034  377744 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1115 10:36:03.705295  377744 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1115 10:36:03.706367  377744 api_server.go:141] control plane version: v1.34.1
	I1115 10:36:03.706398  377744 api_server.go:131] duration metric: took 5.374158ms to wait for apiserver health ...
	I1115 10:36:03.706409  377744 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:36:03.710047  377744 system_pods.go:59] 8 kube-system pods found
	I1115 10:36:03.710083  377744 system_pods.go:61] "coredns-66bc5c9577-fjzk5" [d4d185bc-88ec-4edb-b250-6a59ee426bf5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:03.710095  377744 system_pods.go:61] "etcd-embed-certs-719574" [293cb467-4f15-417b-944e-457c3ac7e56f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:36:03.710106  377744 system_pods.go:61] "kindnet-ql2r4" [224e2951-6c97-449d-8ff8-f72aa6d36d60] Running
	I1115 10:36:03.710122  377744 system_pods.go:61] "kube-apiserver-embed-certs-719574" [43a83385-040c-470f-8242-5066617ac8d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:36:03.710135  377744 system_pods.go:61] "kube-controller-manager-embed-certs-719574" [78f16cd1-774e-4556-9db6-bca54bc8214b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:36:03.710141  377744 system_pods.go:61] "kube-proxy-kmc8c" [3534c76c-4b99-4b84-ba00-21d0d49e770f] Running
	I1115 10:36:03.710147  377744 system_pods.go:61] "kube-scheduler-embed-certs-719574" [9b8d2d78-442d-48cc-88be-29b9eb7017e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:36:03.710158  377744 system_pods.go:61] "storage-provisioner" [39c3baf2-24de-475e-aeef-a10825991ca3] Running
	I1115 10:36:03.710165  377744 system_pods.go:74] duration metric: took 3.749108ms to wait for pod list to return data ...
	I1115 10:36:03.710174  377744 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:36:03.712493  377744 default_sa.go:45] found service account: "default"
	I1115 10:36:03.712513  377744 default_sa.go:55] duration metric: took 2.331314ms for default service account to be created ...
	I1115 10:36:03.712522  377744 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:36:03.715355  377744 system_pods.go:86] 8 kube-system pods found
	I1115 10:36:03.715378  377744 system_pods.go:89] "coredns-66bc5c9577-fjzk5" [d4d185bc-88ec-4edb-b250-6a59ee426bf5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:03.715386  377744 system_pods.go:89] "etcd-embed-certs-719574" [293cb467-4f15-417b-944e-457c3ac7e56f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:36:03.715391  377744 system_pods.go:89] "kindnet-ql2r4" [224e2951-6c97-449d-8ff8-f72aa6d36d60] Running
	I1115 10:36:03.715398  377744 system_pods.go:89] "kube-apiserver-embed-certs-719574" [43a83385-040c-470f-8242-5066617ac8d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:36:03.715405  377744 system_pods.go:89] "kube-controller-manager-embed-certs-719574" [78f16cd1-774e-4556-9db6-bca54bc8214b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:36:03.715412  377744 system_pods.go:89] "kube-proxy-kmc8c" [3534c76c-4b99-4b84-ba00-21d0d49e770f] Running
	I1115 10:36:03.715417  377744 system_pods.go:89] "kube-scheduler-embed-certs-719574" [9b8d2d78-442d-48cc-88be-29b9eb7017e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:36:03.715427  377744 system_pods.go:89] "storage-provisioner" [39c3baf2-24de-475e-aeef-a10825991ca3] Running
	I1115 10:36:03.715435  377744 system_pods.go:126] duration metric: took 2.908753ms to wait for k8s-apps to be running ...
	I1115 10:36:03.715443  377744 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:36:03.715482  377744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:36:03.729079  377744 system_svc.go:56] duration metric: took 13.624714ms WaitForService to wait for kubelet
	I1115 10:36:03.729108  377744 kubeadm.go:587] duration metric: took 4.657317817s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:36:03.729130  377744 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:36:03.732380  377744 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:36:03.732409  377744 node_conditions.go:123] node cpu capacity is 8
	I1115 10:36:03.732424  377744 node_conditions.go:105] duration metric: took 3.288836ms to run NodePressure ...
	I1115 10:36:03.732439  377744 start.go:242] waiting for startup goroutines ...
	I1115 10:36:03.732448  377744 start.go:247] waiting for cluster config update ...
	I1115 10:36:03.732463  377744 start.go:256] writing updated cluster config ...
	I1115 10:36:03.732754  377744 ssh_runner.go:195] Run: rm -f paused
	I1115 10:36:03.737164  377744 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:36:03.740586  377744 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fjzk5" in "kube-system" namespace to be "Ready" or be gone ...
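The pod_ready waits above treat a pod as "Ready" when its PodReady condition is True. A hedged client-go sketch of that check; the kubeconfig path is an assumption and the namespace and pod name are taken from the log:

// Sketch of a per-pod Ready check, not minikube's pod_ready implementation.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's PodReady condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-66bc5c9577-fjzk5", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s ready: %v\n", pod.Name, podIsReady(pod))
}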
	I1115 10:36:03.755299  378695 kubeadm.go:884] updating cluster {Name:newest-cni-086099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disa
bleMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:36:03.755432  378695 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:36:03.755482  378695 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:03.794722  378695 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:03.794749  378695 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:36:03.794805  378695 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:03.826109  378695 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:03.826142  378695 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:36:03.826153  378695 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1115 10:36:03.826264  378695 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-086099 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:36:03.826354  378695 ssh_runner.go:195] Run: crio config
	I1115 10:36:03.879671  378695 cni.go:84] Creating CNI manager for ""
	I1115 10:36:03.879701  378695 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:36:03.879717  378695 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1115 10:36:03.879739  378695 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-086099 NodeName:newest-cni-086099 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:36:03.879883  378695 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-086099"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:36:03.879988  378695 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:36:03.888992  378695 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:36:03.889052  378695 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:36:03.897294  378695 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1115 10:36:03.911151  378695 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:36:03.930297  378695 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1115 10:36:03.945072  378695 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:36:03.948706  378695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:03.959243  378695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:04.058938  378695 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:36:04.093857  378695 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099 for IP: 192.168.103.2
	I1115 10:36:04.093888  378695 certs.go:195] generating shared ca certs ...
	I1115 10:36:04.093909  378695 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:04.094076  378695 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 10:36:04.094148  378695 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 10:36:04.094163  378695 certs.go:257] generating profile certs ...
	I1115 10:36:04.094230  378695 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/client.key
	I1115 10:36:04.094258  378695 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/client.crt with IP's: []
	I1115 10:36:04.385453  378695 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/client.crt ...
	I1115 10:36:04.385478  378695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/client.crt: {Name:mk40f6a053043aca087e720d3a4da44f4215e456 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:04.385623  378695 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/client.key ...
	I1115 10:36:04.385633  378695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/client.key: {Name:mk7ba7a9aed87498b12d0ea82f1fd16a2802adbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:04.385729  378695 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key.d719cdad
	I1115 10:36:04.385749  378695 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.crt.d719cdad with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1115 10:36:04.782829  378695 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.crt.d719cdad ...
	I1115 10:36:04.782863  378695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.crt.d719cdad: {Name:mkcdec4fb6d5949c6190ac10a0f9caeb369ef1df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:04.783103  378695 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key.d719cdad ...
	I1115 10:36:04.783129  378695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key.d719cdad: {Name:mk74203e2c301a3a488fc95324a401039fa8106d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:04.783253  378695 certs.go:382] copying /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.crt.d719cdad -> /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.crt
	I1115 10:36:04.783373  378695 certs.go:386] copying /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key.d719cdad -> /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key
	I1115 10:36:04.783463  378695 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.key
	I1115 10:36:04.783486  378695 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.crt with IP's: []
	I1115 10:36:04.900301  378695 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.crt ...
	I1115 10:36:04.900329  378695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.crt: {Name:mk0d5b4842614d84db6a4d32b9e40b0ee2961026 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:04.900527  378695 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.key ...
	I1115 10:36:04.900547  378695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.key: {Name:mkc0cf01fd3204cf2eb33c45d49bdb1a3af7d389 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:04.900769  378695 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:36:04.900806  378695 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:36:04.900817  378695 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:36:04.900837  378695 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:36:04.900863  378695 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:36:04.900884  378695 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:36:04.900931  378695 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:04.901498  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:36:04.920490  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:36:04.938524  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:36:04.956167  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:36:04.974935  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 10:36:04.995270  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:36:05.016110  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:36:05.034440  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:36:05.051948  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:36:05.071136  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:36:05.100067  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:36:05.120144  378695 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:36:05.133751  378695 ssh_runner.go:195] Run: openssl version
	I1115 10:36:05.140442  378695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:36:05.150520  378695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:05.155339  378695 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:05.155411  378695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:05.205520  378695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:36:05.214306  378695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:36:05.222589  378695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:36:05.226661  378695 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:36:05.226723  378695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:36:05.269094  378695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:36:05.282750  378695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:36:05.291785  378695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:36:05.295742  378695 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:36:05.295801  378695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:36:05.341059  378695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:36:05.352931  378695 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:36:05.357729  378695 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 10:36:05.357794  378695 kubeadm.go:401] StartCluster: {Name:newest-cni-086099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disable
Metrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:05.357898  378695 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:36:05.358038  378695 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:36:05.389342  378695 cri.go:89] found id: ""
	I1115 10:36:05.389409  378695 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:36:05.399176  378695 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 10:36:05.407568  378695 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 10:36:05.407619  378695 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 10:36:05.415732  378695 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 10:36:05.415750  378695 kubeadm.go:158] found existing configuration files:
	
	I1115 10:36:05.415789  378695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 10:36:05.423933  378695 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 10:36:05.424003  378695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 10:36:05.431425  378695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 10:36:05.439333  378695 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 10:36:05.439396  378695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 10:36:05.446777  378695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 10:36:05.454437  378695 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 10:36:05.454481  378695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 10:36:05.461644  378695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 10:36:05.468875  378695 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 10:36:05.468937  378695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 10:36:05.476821  378695 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 10:36:05.516431  378695 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 10:36:05.516536  378695 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 10:36:05.536153  378695 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 10:36:05.536251  378695 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1115 10:36:05.536322  378695 kubeadm.go:319] OS: Linux
	I1115 10:36:05.536373  378695 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 10:36:05.536430  378695 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 10:36:05.536519  378695 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 10:36:05.536598  378695 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 10:36:05.536682  378695 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 10:36:05.536769  378695 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 10:36:05.536832  378695 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 10:36:05.536877  378695 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 10:36:05.536920  378695 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 10:36:05.598690  378695 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 10:36:05.598871  378695 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 10:36:05.599041  378695 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 10:36:05.606076  378695 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 10:36:05.608588  378695 out.go:252]   - Generating certificates and keys ...
	I1115 10:36:05.608685  378695 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 10:36:05.608773  378695 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 10:36:06.648403  378695 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 10:36:06.817549  378695 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	
	
	==> CRI-O <==
	Nov 15 10:35:39 no-preload-283677 conmon[1240]: conmon 2ed35452acbea6332ff4 <ninfo>: container 1242 exited with status 1
	Nov 15 10:35:40 no-preload-283677 crio[676]: time="2025-11-15T10:35:40.157502626Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=df1e6dd1-7318-4c8b-91bc-5ffa9cf64224 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:35:40 no-preload-283677 crio[676]: time="2025-11-15T10:35:40.158502996Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=86b1da1c-cb4a-4ffe-8488-9d2d62f4f127 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:35:40 no-preload-283677 crio[676]: time="2025-11-15T10:35:40.159615726Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=16fb2d70-5a33-420e-a0b6-2b420fe2dec8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:35:40 no-preload-283677 crio[676]: time="2025-11-15T10:35:40.159755894Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:35:40 no-preload-283677 crio[676]: time="2025-11-15T10:35:40.166297094Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:35:40 no-preload-283677 crio[676]: time="2025-11-15T10:35:40.166487289Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/328acd9dd634cca1ed733c1b4af1466bc7c6b10d95e2574f93fc6d7dcaaf8618/merged/etc/passwd: no such file or directory"
	Nov 15 10:35:40 no-preload-283677 crio[676]: time="2025-11-15T10:35:40.166527059Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/328acd9dd634cca1ed733c1b4af1466bc7c6b10d95e2574f93fc6d7dcaaf8618/merged/etc/group: no such file or directory"
	Nov 15 10:35:40 no-preload-283677 crio[676]: time="2025-11-15T10:35:40.166833868Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:35:40 no-preload-283677 crio[676]: time="2025-11-15T10:35:40.191673371Z" level=info msg="Created container 8bd2c710a2c580a8988b5b3071d9f3587a5b7a4d80023277e15840a6b9f2c4fa: kube-system/storage-provisioner/storage-provisioner" id=16fb2d70-5a33-420e-a0b6-2b420fe2dec8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:35:40 no-preload-283677 crio[676]: time="2025-11-15T10:35:40.192406472Z" level=info msg="Starting container: 8bd2c710a2c580a8988b5b3071d9f3587a5b7a4d80023277e15840a6b9f2c4fa" id=84a58085-b82d-4069-b967-66203fe35312 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:35:40 no-preload-283677 crio[676]: time="2025-11-15T10:35:40.194226185Z" level=info msg="Started container" PID=1853 containerID=8bd2c710a2c580a8988b5b3071d9f3587a5b7a4d80023277e15840a6b9f2c4fa description=kube-system/storage-provisioner/storage-provisioner id=84a58085-b82d-4069-b967-66203fe35312 name=/runtime.v1.RuntimeService/StartContainer sandboxID=13b8bae9216512b5bf4758ca3a1dfaa68cca71d6c3811941f471827761cc754a
	Nov 15 10:36:01 no-preload-283677 crio[676]: time="2025-11-15T10:36:01.896877596Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=157d570e-5001-43a2-84fa-58861c49160c name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:36:01 no-preload-283677 crio[676]: time="2025-11-15T10:36:01.898066116Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=780c95a1-a0d0-4d69-b9ca-08903fb67ee4 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:36:01 no-preload-283677 crio[676]: time="2025-11-15T10:36:01.899156108Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2g5rq/dashboard-metrics-scraper" id=5550b067-6351-45f6-b925-c6e6f82dd105 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:36:01 no-preload-283677 crio[676]: time="2025-11-15T10:36:01.89929904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:01 no-preload-283677 crio[676]: time="2025-11-15T10:36:01.907302702Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:01 no-preload-283677 crio[676]: time="2025-11-15T10:36:01.9079356Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:01 no-preload-283677 crio[676]: time="2025-11-15T10:36:01.926226277Z" level=info msg="Created container 2d20d5dc1c2b66cabb2156b9f8d8669dcb25cbd73f94552cea3da087e47a37c2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2g5rq/dashboard-metrics-scraper" id=5550b067-6351-45f6-b925-c6e6f82dd105 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:36:01 no-preload-283677 crio[676]: time="2025-11-15T10:36:01.927181675Z" level=info msg="Starting container: 2d20d5dc1c2b66cabb2156b9f8d8669dcb25cbd73f94552cea3da087e47a37c2" id=8b0f5df2-1489-418e-82fd-d84cbfc35fcc name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:36:01 no-preload-283677 crio[676]: time="2025-11-15T10:36:01.929651368Z" level=info msg="Started container" PID=1890 containerID=2d20d5dc1c2b66cabb2156b9f8d8669dcb25cbd73f94552cea3da087e47a37c2 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2g5rq/dashboard-metrics-scraper id=8b0f5df2-1489-418e-82fd-d84cbfc35fcc name=/runtime.v1.RuntimeService/StartContainer sandboxID=8f018dcce94ab2eb56b37b5ecd10329c9069076fb33aafd7641bcecfc92ae8ae
	Nov 15 10:36:01 no-preload-283677 conmon[1888]: conmon 2d20d5dc1c2b66cabb21 <ninfo>: container 1890 exited with status 1
	Nov 15 10:36:02 no-preload-283677 crio[676]: time="2025-11-15T10:36:02.216570162Z" level=info msg="Removing container: 8328e5d7622543edd42c5075bda24e795b23b86ebfab1de17757659a27b5e8dd" id=8da1d56f-d4b0-4f97-976e-aee2890deff7 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:36:02 no-preload-283677 crio[676]: time="2025-11-15T10:36:02.222726231Z" level=info msg="Error loading conmon cgroup of container 8328e5d7622543edd42c5075bda24e795b23b86ebfab1de17757659a27b5e8dd: cgroup deleted" id=8da1d56f-d4b0-4f97-976e-aee2890deff7 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:36:02 no-preload-283677 crio[676]: time="2025-11-15T10:36:02.226421848Z" level=info msg="Removed container 8328e5d7622543edd42c5075bda24e795b23b86ebfab1de17757659a27b5e8dd: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2g5rq/dashboard-metrics-scraper" id=8da1d56f-d4b0-4f97-976e-aee2890deff7 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	2d20d5dc1c2b6       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           5 seconds ago        Exited              dashboard-metrics-scraper   3                   8f018dcce94ab       dashboard-metrics-scraper-6ffb444bf9-2g5rq   kubernetes-dashboard
	8bd2c710a2c58       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           27 seconds ago       Running             storage-provisioner         2                   13b8bae921651       storage-provisioner                          kube-system
	72e788657e34c       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago       Running             kubernetes-dashboard        0                   193e8217312fd       kubernetes-dashboard-855c9754f9-2q95v        kubernetes-dashboard
	a1b1db57d4972       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           58 seconds ago       Running             coredns                     1                   2fa3b9c8ec41c       coredns-66bc5c9577-66nkj                     kube-system
	122754c749135       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           58 seconds ago       Running             busybox                     1                   1be0704e3445b       busybox                                      default
	e19aa2c491434       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           58 seconds ago       Running             kindnet-cni                 1                   28c885731f594       kindnet-x5rwg                                kube-system
	2ed35452acbea       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           58 seconds ago       Exited              storage-provisioner         1                   13b8bae921651       storage-provisioner                          kube-system
	fbd534126f75a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           58 seconds ago       Running             kube-proxy                  1                   4bf0ad2e7057d       kube-proxy-vjbxg                             kube-system
	324a3ff1cd89d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        1                   fe46357c0d663       etcd-no-preload-283677                       kube-system
	8c532dc6e6980       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           About a minute ago   Running             kube-scheduler              1                   d0390893cc6f9       kube-scheduler-no-preload-283677             kube-system
	ac246fc71f81d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           About a minute ago   Running             kube-controller-manager     1                   69bfb06fee1e6       kube-controller-manager-no-preload-283677    kube-system
	c26ba954b1e2f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           About a minute ago   Running             kube-apiserver              1                   4de9b34c6c186       kube-apiserver-no-preload-283677             kube-system
	
	
	==> coredns [a1b1db57d497261f854972caaaabfb2ff94437f156ebd9a824ae6eec9b4717be] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59996 - 52847 "HINFO IN 2498217211002336889.1691539149669243410. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.058792624s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               no-preload-283677
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-283677
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=no-preload-283677
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_34_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:34:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-283677
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:35:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:35:38 +0000   Sat, 15 Nov 2025 10:34:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:35:38 +0000   Sat, 15 Nov 2025 10:34:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:35:38 +0000   Sat, 15 Nov 2025 10:34:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:35:38 +0000   Sat, 15 Nov 2025 10:34:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-283677
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                24a4b1bc-3dc5-430d-9221-78b09868633f
	  Boot ID:                    6a33f504-3a8c-4d49-8fc5-301cf5275efc
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-66nkj                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     114s
	  kube-system                 etcd-no-preload-283677                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         119s
	  kube-system                 kindnet-x5rwg                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-no-preload-283677              250m (3%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-no-preload-283677     200m (2%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-vjbxg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-no-preload-283677              100m (1%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-2g5rq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-2q95v         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 113s               kube-proxy       
	  Normal   Starting                 58s                kube-proxy       
	  Normal   Starting                 2m                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     119s               kubelet          Node no-preload-283677 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    119s               kubelet          Node no-preload-283677 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  119s               kubelet          Node no-preload-283677 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           115s               node-controller  Node no-preload-283677 event: Registered Node no-preload-283677 in Controller
	  Normal   NodeReady                99s                kubelet          Node no-preload-283677 status is now: NodeReady
	  Normal   Starting                 64s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 64s)  kubelet          Node no-preload-283677 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 64s)  kubelet          Node no-preload-283677 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 64s)  kubelet          Node no-preload-283677 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                node-controller  Node no-preload-283677 event: Registered Node no-preload-283677 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 7f 53 f9 15 93 08 06
	[  +0.017821] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c6 66 25 e9 45 28 08 06
	[ +19.178294] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 a3 65 b9 28 15 08 06
	[  +0.347929] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 6d df 33 a5 ad 08 06
	[  +0.000319] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 7f 53 f9 15 93 08 06
	[Nov15 10:33] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 96 ed 10 e8 9a 80 08 06
	[  +0.000481] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 a3 65 b9 28 15 08 06
	[ +35.214217] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 c2 9a 56 52 0c 08 06
	[  +9.104720] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 c2 c4 d1 7c dd 08 06
	[Nov15 10:34] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 68 1e 80 53 05 08 06
	[  +0.000444] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 c2 c4 d1 7c dd 08 06
	[ +18.836046] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 98 db 78 4f 0b 08 06
	[  +0.000708] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 c2 9a 56 52 0c 08 06
	
	
	==> etcd [324a3ff1cd89d80c17c217faf6db6f2b6c9f52f5abe13f2e83485e1e03b0c7aa] <==
	{"level":"warn","ts":"2025-11-15T10:35:06.916165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.924367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.984709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.991644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.999466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.006497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.015822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.025336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.033723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.073199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.081654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.089447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.096797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.105026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.112188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.120165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.131304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.140469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.172470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.187866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.199156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.208138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.215553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.223201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.293525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50122","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:36:08 up  2:18,  0 user,  load average: 3.95, 4.37, 2.78
	Linux no-preload-283677 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e19aa2c4914343607f446514b29eff501e18401aa8e8ae99efee7a13e1b84831] <==
	I1115 10:35:09.745225       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:35:09.745587       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1115 10:35:09.745752       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:35:09.745768       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:35:09.745790       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:35:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:35:10.047282       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:35:10.047313       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:35:10.047327       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:35:10.047685       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 10:35:10.447809       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:35:10.447846       1 metrics.go:72] Registering metrics
	I1115 10:35:10.447936       1 controller.go:711] "Syncing nftables rules"
	I1115 10:35:20.046533       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:35:20.046589       1 main.go:301] handling current node
	I1115 10:35:30.047226       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:35:30.047276       1 main.go:301] handling current node
	I1115 10:35:40.047202       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:35:40.047252       1 main.go:301] handling current node
	I1115 10:35:50.054066       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:35:50.054124       1 main.go:301] handling current node
	I1115 10:36:00.054044       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:36:00.054076       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c26ba954b1e2fc1ea4ef12fc0801c0a31171b23e67cf48fdeb9207cbdb3ba3b0] <==
	I1115 10:35:07.995590       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 10:35:08.001492       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:35:08.067754       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 10:35:08.067913       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 10:35:08.068111       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 10:35:08.068132       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 10:35:08.068326       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 10:35:08.068907       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1115 10:35:08.068940       1 policy_source.go:240] refreshing policies
	I1115 10:35:08.069381       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:35:08.070105       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1115 10:35:08.072427       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1115 10:35:08.076758       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 10:35:08.076776       1 cache.go:39] Caches are synced for autoregister controller
	I1115 10:35:08.835038       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 10:35:08.861494       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:35:08.909236       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:35:09.069371       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:35:09.070226       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:35:09.103179       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:35:09.309601       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.247.49"}
	I1115 10:35:09.523802       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.82.231"}
	I1115 10:35:12.435281       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:35:12.683882       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:35:12.886280       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [ac246fc71f81d0fb5e3cd430730c903f2ca388376feb3a8dc321eb565aa6c5ee] <==
	I1115 10:35:12.270603       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1115 10:35:12.270681       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 10:35:12.270699       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 10:35:12.278317       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 10:35:12.278348       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 10:35:12.278354       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 10:35:12.278496       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1115 10:35:12.278528       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 10:35:12.279683       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 10:35:12.279762       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 10:35:12.279996       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 10:35:12.281971       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1115 10:35:12.283164       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 10:35:12.285442       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1115 10:35:12.286743       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:35:12.286752       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 10:35:12.288982       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 10:35:12.289071       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 10:35:12.289158       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-283677"
	I1115 10:35:12.289223       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1115 10:35:12.291368       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 10:35:12.316278       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:35:12.329516       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:35:12.329642       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:35:12.329658       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [fbd534126f75ad8fd1d5fdcbd5ef4977e3b134a0b5f0bb5ef906b59631045d73] <==
	I1115 10:35:09.586327       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:35:09.661280       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:35:09.762859       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:35:09.762900       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1115 10:35:09.763082       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:35:09.795648       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:35:09.795731       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:35:09.811610       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:35:09.819601       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:35:09.820070       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:35:09.824207       1 config.go:200] "Starting service config controller"
	I1115 10:35:09.824352       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:35:09.824460       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:35:09.824718       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:35:09.824788       1 config.go:309] "Starting node config controller"
	I1115 10:35:09.824819       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:35:09.825681       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:35:09.826579       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:35:09.826612       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:35:09.925896       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 10:35:09.930872       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:35:09.927002       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [8c532dc6e69808afe1a0ffb587f828c0b3f6fa37c71b2fbc5ce4abdafdedf008] <==
	I1115 10:35:05.573329       1 serving.go:386] Generated self-signed cert in-memory
	W1115 10:35:07.983757       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1115 10:35:07.983856       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W1115 10:35:07.983889       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1115 10:35:07.983928       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1115 10:35:08.080459       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 10:35:08.080488       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:35:08.083406       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:35:08.083483       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:35:08.084594       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 10:35:08.084689       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:35:08.184415       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:35:17 no-preload-283677 kubelet[821]: I1115 10:35:17.089999     821 scope.go:117] "RemoveContainer" containerID="983ac1cf399ecf93330e2267f7ddf4d73213d8ac7cd14b1e9f060882ae9c8c7e"
	Nov 15 10:35:18 no-preload-283677 kubelet[821]: I1115 10:35:18.094375     821 scope.go:117] "RemoveContainer" containerID="983ac1cf399ecf93330e2267f7ddf4d73213d8ac7cd14b1e9f060882ae9c8c7e"
	Nov 15 10:35:18 no-preload-283677 kubelet[821]: I1115 10:35:18.094517     821 scope.go:117] "RemoveContainer" containerID="315f43f0dc2929afbeb2abbedd88bd1f6ac2617078c4a55ed7654db76deb9986"
	Nov 15 10:35:18 no-preload-283677 kubelet[821]: E1115 10:35:18.094697     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2g5rq_kubernetes-dashboard(0b38b832-e405-4967-9a32-6a627d9c19d2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2g5rq" podUID="0b38b832-e405-4967-9a32-6a627d9c19d2"
	Nov 15 10:35:19 no-preload-283677 kubelet[821]: I1115 10:35:19.098737     821 scope.go:117] "RemoveContainer" containerID="315f43f0dc2929afbeb2abbedd88bd1f6ac2617078c4a55ed7654db76deb9986"
	Nov 15 10:35:19 no-preload-283677 kubelet[821]: E1115 10:35:19.098918     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2g5rq_kubernetes-dashboard(0b38b832-e405-4967-9a32-6a627d9c19d2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2g5rq" podUID="0b38b832-e405-4967-9a32-6a627d9c19d2"
	Nov 15 10:35:20 no-preload-283677 kubelet[821]: I1115 10:35:20.101472     821 scope.go:117] "RemoveContainer" containerID="315f43f0dc2929afbeb2abbedd88bd1f6ac2617078c4a55ed7654db76deb9986"
	Nov 15 10:35:20 no-preload-283677 kubelet[821]: E1115 10:35:20.101760     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2g5rq_kubernetes-dashboard(0b38b832-e405-4967-9a32-6a627d9c19d2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2g5rq" podUID="0b38b832-e405-4967-9a32-6a627d9c19d2"
	Nov 15 10:35:23 no-preload-283677 kubelet[821]: I1115 10:35:23.119784     821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2q95v" podStartSLOduration=2.011255788 podStartE2EDuration="11.119762329s" podCreationTimestamp="2025-11-15 10:35:12 +0000 UTC" firstStartedPulling="2025-11-15 10:35:13.192334118 +0000 UTC m=+9.403773310" lastFinishedPulling="2025-11-15 10:35:22.300840657 +0000 UTC m=+18.512279851" observedRunningTime="2025-11-15 10:35:23.119686143 +0000 UTC m=+19.331125354" watchObservedRunningTime="2025-11-15 10:35:23.119762329 +0000 UTC m=+19.331201542"
	Nov 15 10:35:32 no-preload-283677 kubelet[821]: I1115 10:35:32.896043     821 scope.go:117] "RemoveContainer" containerID="315f43f0dc2929afbeb2abbedd88bd1f6ac2617078c4a55ed7654db76deb9986"
	Nov 15 10:35:33 no-preload-283677 kubelet[821]: I1115 10:35:33.136713     821 scope.go:117] "RemoveContainer" containerID="315f43f0dc2929afbeb2abbedd88bd1f6ac2617078c4a55ed7654db76deb9986"
	Nov 15 10:35:33 no-preload-283677 kubelet[821]: I1115 10:35:33.136938     821 scope.go:117] "RemoveContainer" containerID="8328e5d7622543edd42c5075bda24e795b23b86ebfab1de17757659a27b5e8dd"
	Nov 15 10:35:33 no-preload-283677 kubelet[821]: E1115 10:35:33.137163     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2g5rq_kubernetes-dashboard(0b38b832-e405-4967-9a32-6a627d9c19d2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2g5rq" podUID="0b38b832-e405-4967-9a32-6a627d9c19d2"
	Nov 15 10:35:38 no-preload-283677 kubelet[821]: I1115 10:35:38.618739     821 scope.go:117] "RemoveContainer" containerID="8328e5d7622543edd42c5075bda24e795b23b86ebfab1de17757659a27b5e8dd"
	Nov 15 10:35:38 no-preload-283677 kubelet[821]: E1115 10:35:38.618949     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2g5rq_kubernetes-dashboard(0b38b832-e405-4967-9a32-6a627d9c19d2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2g5rq" podUID="0b38b832-e405-4967-9a32-6a627d9c19d2"
	Nov 15 10:35:40 no-preload-283677 kubelet[821]: I1115 10:35:40.157145     821 scope.go:117] "RemoveContainer" containerID="2ed35452acbea6332ff49a4e3b850561f71e2992e41503717b83a4170ecdae97"
	Nov 15 10:35:48 no-preload-283677 kubelet[821]: I1115 10:35:48.896860     821 scope.go:117] "RemoveContainer" containerID="8328e5d7622543edd42c5075bda24e795b23b86ebfab1de17757659a27b5e8dd"
	Nov 15 10:35:48 no-preload-283677 kubelet[821]: E1115 10:35:48.897063     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2g5rq_kubernetes-dashboard(0b38b832-e405-4967-9a32-6a627d9c19d2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2g5rq" podUID="0b38b832-e405-4967-9a32-6a627d9c19d2"
	Nov 15 10:36:01 no-preload-283677 kubelet[821]: I1115 10:36:01.896364     821 scope.go:117] "RemoveContainer" containerID="8328e5d7622543edd42c5075bda24e795b23b86ebfab1de17757659a27b5e8dd"
	Nov 15 10:36:02 no-preload-283677 kubelet[821]: I1115 10:36:02.214871     821 scope.go:117] "RemoveContainer" containerID="8328e5d7622543edd42c5075bda24e795b23b86ebfab1de17757659a27b5e8dd"
	Nov 15 10:36:02 no-preload-283677 kubelet[821]: I1115 10:36:02.215104     821 scope.go:117] "RemoveContainer" containerID="2d20d5dc1c2b66cabb2156b9f8d8669dcb25cbd73f94552cea3da087e47a37c2"
	Nov 15 10:36:02 no-preload-283677 kubelet[821]: E1115 10:36:02.215301     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2g5rq_kubernetes-dashboard(0b38b832-e405-4967-9a32-6a627d9c19d2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2g5rq" podUID="0b38b832-e405-4967-9a32-6a627d9c19d2"
	Nov 15 10:36:04 no-preload-283677 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:36:04 no-preload-283677 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:36:04 no-preload-283677 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [72e788657e34c4ceb611b3c182b01cfe009c0ebba075aa6c882e7e27152c31ee] <==
	2025/11/15 10:35:22 Using namespace: kubernetes-dashboard
	2025/11/15 10:35:22 Using in-cluster config to connect to apiserver
	2025/11/15 10:35:22 Using secret token for csrf signing
	2025/11/15 10:35:22 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 10:35:22 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 10:35:22 Successful initial request to the apiserver, version: v1.34.1
	2025/11/15 10:35:22 Generating JWE encryption key
	2025/11/15 10:35:22 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 10:35:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 10:35:22 Initializing JWE encryption key from synchronized object
	2025/11/15 10:35:22 Creating in-cluster Sidecar client
	2025/11/15 10:35:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:35:22 Serving insecurely on HTTP port: 9090
	2025/11/15 10:35:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:35:22 Starting overwatch
	
	
	==> storage-provisioner [2ed35452acbea6332ff49a4e3b850561f71e2992e41503717b83a4170ecdae97] <==
	I1115 10:35:09.588925       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 10:35:39.592868       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [8bd2c710a2c580a8988b5b3071d9f3587a5b7a4d80023277e15840a6b9f2c4fa] <==
	I1115 10:35:40.214232       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:35:40.214286       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 10:35:40.216533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:43.671781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:47.931835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:51.529817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:54.583087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:57.606268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:57.611525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:35:57.611671       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:35:57.611773       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ca833280-d4c1-43fb-bae2-a3f123cb9113", APIVersion:"v1", ResourceVersion:"641", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-283677_992ef63f-1a8a-4666-97db-42a83525fa09 became leader
	I1115 10:35:57.611815       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-283677_992ef63f-1a8a-4666-97db-42a83525fa09!
	W1115 10:35:57.615332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:57.618869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:35:57.711933       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-283677_992ef63f-1a8a-4666-97db-42a83525fa09!
	W1115 10:35:59.621622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:59.625421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:01.628989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:01.634243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:03.637888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:03.642011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:05.645146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:05.649994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:07.654280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:07.659270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-283677 -n no-preload-283677
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-283677 -n no-preload-283677: exit status 2 (385.015392ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-283677 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-283677
helpers_test.go:243: (dbg) docker inspect no-preload-283677:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5be6667f097090011551b6e80bf165ef8b8393ba894fdd8185eba7e40ac44832",
	        "Created": "2025-11-15T10:33:34.248576658Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 369308,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:34:57.408433102Z",
	            "FinishedAt": "2025-11-15T10:34:55.946285352Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/5be6667f097090011551b6e80bf165ef8b8393ba894fdd8185eba7e40ac44832/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5be6667f097090011551b6e80bf165ef8b8393ba894fdd8185eba7e40ac44832/hostname",
	        "HostsPath": "/var/lib/docker/containers/5be6667f097090011551b6e80bf165ef8b8393ba894fdd8185eba7e40ac44832/hosts",
	        "LogPath": "/var/lib/docker/containers/5be6667f097090011551b6e80bf165ef8b8393ba894fdd8185eba7e40ac44832/5be6667f097090011551b6e80bf165ef8b8393ba894fdd8185eba7e40ac44832-json.log",
	        "Name": "/no-preload-283677",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-283677:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-283677",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5be6667f097090011551b6e80bf165ef8b8393ba894fdd8185eba7e40ac44832",
	                "LowerDir": "/var/lib/docker/overlay2/45ca12b29ace88526ba1031080c77398d7e970e8d9cdc89fe714afcbb97fdb8d-init/diff:/var/lib/docker/overlay2/507c85b6fb43d9c98216fac79d7a8d08dd20b2a63d0fbfc758336e5d38c04044/diff",
	                "MergedDir": "/var/lib/docker/overlay2/45ca12b29ace88526ba1031080c77398d7e970e8d9cdc89fe714afcbb97fdb8d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/45ca12b29ace88526ba1031080c77398d7e970e8d9cdc89fe714afcbb97fdb8d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/45ca12b29ace88526ba1031080c77398d7e970e8d9cdc89fe714afcbb97fdb8d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-283677",
	                "Source": "/var/lib/docker/volumes/no-preload-283677/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-283677",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-283677",
	                "name.minikube.sigs.k8s.io": "no-preload-283677",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8264dc33a3a532080f0ae6aff4eee6a056fb0dd7b5e521e3e96566788f2aa5ec",
	            "SandboxKey": "/var/run/docker/netns/8264dc33a3a5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-283677": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "31f43b80693175788eae574d1283c9772486f60a6f30b977a4f67f74c18220c7",
	                    "EndpointID": "a3ba68cfdc24803d972cef60a33a802c5d4b70629f0fcc30cdc85eae21f7a8d0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "3a:fc:15:1d:5e:6a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-283677",
	                        "5be6667f0970"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
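When only a few fields of the inspect dump matter for the Pause failure (the runtime state and the published API-server port), docker inspect can be narrowed with --format, using the same Go-template style that appears elsewhere in these logs; a sketch against this container:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' no-preload-283677
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-283677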
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-283677 -n no-preload-283677
E1115 10:36:09.321450   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/auto-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-283677 -n no-preload-283677: exit status 2 (399.800035ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-283677 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-283677 logs -n 25: (1.215159286s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-931243 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ ssh     │ -p bridge-931243 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo containerd config dump                                                                                                                                                                                                  │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo crio config                                                                                                                                                                                                             │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ delete  │ -p bridge-931243                                                                                                                                                                                                                              │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ delete  │ -p disable-driver-mounts-435527                                                                                                                                                                                                               │ disable-driver-mounts-435527 │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ start   │ -p default-k8s-diff-port-026691 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p no-preload-283677 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ start   │ -p no-preload-283677 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable metrics-server -p embed-certs-719574 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ stop    │ -p embed-certs-719574 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ image   │ old-k8s-version-087235 image list --format=json                                                                                                                                                                                               │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ pause   │ -p old-k8s-version-087235 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ delete  │ -p old-k8s-version-087235                                                                                                                                                                                                                     │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-719574 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p embed-certs-719574 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ delete  │ -p old-k8s-version-087235                                                                                                                                                                                                                     │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p newest-cni-086099 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ image   │ no-preload-283677 image list --format=json                                                                                                                                                                                                    │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ pause   │ -p no-preload-283677 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:35:51
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:35:51.880635  378695 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:35:51.880972  378695 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:51.880985  378695 out.go:374] Setting ErrFile to fd 2...
	I1115 10:35:51.880990  378695 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:51.881260  378695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 10:35:51.881819  378695 out.go:368] Setting JSON to false
	I1115 10:35:51.883178  378695 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8289,"bootTime":1763194663,"procs":305,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:35:51.883287  378695 start.go:143] virtualization: kvm guest
	I1115 10:35:51.885121  378695 out.go:179] * [newest-cni-086099] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:35:51.886362  378695 notify.go:221] Checking for updates...
	I1115 10:35:51.886418  378695 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 10:35:51.887691  378695 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:35:51.888785  378695 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:35:51.889883  378695 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube
	I1115 10:35:51.891041  378695 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:35:51.895496  378695 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:35:51.897243  378695 config.go:182] Loaded profile config "default-k8s-diff-port-026691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:51.897400  378695 config.go:182] Loaded profile config "embed-certs-719574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:51.897562  378695 config.go:182] Loaded profile config "no-preload-283677": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:51.897686  378695 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:35:51.923206  378695 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:35:51.923309  378695 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:35:51.980066  378695 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:true NGoroutines:77 SystemTime:2025-11-15 10:35:51.97030866 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed
by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:35:51.980169  378695 docker.go:319] overlay module found
	I1115 10:35:51.982196  378695 out.go:179] * Using the docker driver based on user configuration
	I1115 10:35:51.983355  378695 start.go:309] selected driver: docker
	I1115 10:35:51.983369  378695 start.go:930] validating driver "docker" against <nil>
	I1115 10:35:51.983380  378695 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:35:51.984213  378695 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:35:52.044923  378695 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:76 SystemTime:2025-11-15 10:35:52.034876039 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:35:52.045179  378695 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1115 10:35:52.045216  378695 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1115 10:35:52.045457  378695 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1115 10:35:52.047189  378695 out.go:179] * Using Docker driver with root privileges
	I1115 10:35:52.048407  378695 cni.go:84] Creating CNI manager for ""
	I1115 10:35:52.048473  378695 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:35:52.048484  378695 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 10:35:52.048535  378695 start.go:353] cluster config:
	{Name:newest-cni-086099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:35:52.049826  378695 out.go:179] * Starting "newest-cni-086099" primary control-plane node in "newest-cni-086099" cluster
	I1115 10:35:52.050909  378695 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:35:52.052056  378695 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:35:52.053065  378695 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:35:52.053098  378695 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 10:35:52.053116  378695 cache.go:65] Caching tarball of preloaded images
	I1115 10:35:52.053151  378695 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:35:52.053229  378695 preload.go:238] Found /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 10:35:52.053246  378695 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:35:52.053398  378695 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/config.json ...
	I1115 10:35:52.053424  378695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/config.json: {Name:mkf8d02e5e19217377f4420029b0cc1adccada68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:52.074755  378695 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:35:52.074774  378695 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:35:52.074789  378695 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:35:52.074816  378695 start.go:360] acquireMachinesLock for newest-cni-086099: {Name:mk9065475199777f18a95aabcc9dbfda12f72647 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:35:52.074909  378695 start.go:364] duration metric: took 76.491µs to acquireMachinesLock for "newest-cni-086099"
	I1115 10:35:52.074932  378695 start.go:93] Provisioning new machine with config: &{Name:newest-cni-086099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:35:52.075027  378695 start.go:125] createHost starting for "" (driver="docker")
	W1115 10:35:48.630700  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:50.630784  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	I1115 10:35:51.131341  368849 pod_ready.go:94] pod "coredns-66bc5c9577-66nkj" is "Ready"
	I1115 10:35:51.131376  368849 pod_ready.go:86] duration metric: took 41.005975825s for pod "coredns-66bc5c9577-66nkj" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.134231  368849 pod_ready.go:83] waiting for pod "etcd-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.138317  368849 pod_ready.go:94] pod "etcd-no-preload-283677" is "Ready"
	I1115 10:35:51.138345  368849 pod_ready.go:86] duration metric: took 4.088368ms for pod "etcd-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.140317  368849 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.143990  368849 pod_ready.go:94] pod "kube-apiserver-no-preload-283677" is "Ready"
	I1115 10:35:51.144012  368849 pod_ready.go:86] duration metric: took 3.672536ms for pod "kube-apiserver-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.145780  368849 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.329880  368849 pod_ready.go:94] pod "kube-controller-manager-no-preload-283677" is "Ready"
	I1115 10:35:51.329907  368849 pod_ready.go:86] duration metric: took 184.110671ms for pod "kube-controller-manager-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.529891  368849 pod_ready.go:83] waiting for pod "kube-proxy-vjbxg" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.929529  368849 pod_ready.go:94] pod "kube-proxy-vjbxg" is "Ready"
	I1115 10:35:51.929559  368849 pod_ready.go:86] duration metric: took 399.636424ms for pod "kube-proxy-vjbxg" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 10:35:49.488114  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:35:51.988145  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	I1115 10:35:52.129598  368849 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:52.529568  368849 pod_ready.go:94] pod "kube-scheduler-no-preload-283677" is "Ready"
	I1115 10:35:52.529597  368849 pod_ready.go:86] duration metric: took 399.970584ms for pod "kube-scheduler-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:52.529608  368849 pod_ready.go:40] duration metric: took 42.409442772s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:35:52.581745  368849 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:35:52.583831  368849 out.go:179] * Done! kubectl is now configured to use "no-preload-283677" cluster and "default" namespace by default
	I1115 10:35:49.830432  377744 out.go:252] * Restarting existing docker container for "embed-certs-719574" ...
	I1115 10:35:49.830517  377744 cli_runner.go:164] Run: docker start embed-certs-719574
	I1115 10:35:50.114791  377744 cli_runner.go:164] Run: docker container inspect embed-certs-719574 --format={{.State.Status}}
	I1115 10:35:50.134754  377744 kic.go:430] container "embed-certs-719574" state is running.
	I1115 10:35:50.135204  377744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-719574
	I1115 10:35:50.154606  377744 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/config.json ...
	I1115 10:35:50.154928  377744 machine.go:94] provisionDockerMachine start ...
	I1115 10:35:50.155043  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:50.174749  377744 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:50.175176  377744 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1115 10:35:50.175216  377744 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:35:50.176012  377744 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38764->127.0.0.1:33119: read: connection reset by peer
	I1115 10:35:53.310173  377744 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-719574
	
	I1115 10:35:53.310214  377744 ubuntu.go:182] provisioning hostname "embed-certs-719574"
	I1115 10:35:53.310354  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:53.329392  377744 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:53.329615  377744 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1115 10:35:53.329634  377744 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-719574 && echo "embed-certs-719574" | sudo tee /etc/hostname
	I1115 10:35:53.472294  377744 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-719574
	
	I1115 10:35:53.472411  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:53.492862  377744 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:53.493213  377744 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1115 10:35:53.493264  377744 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-719574' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-719574/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-719574' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:35:53.625059  377744 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:35:53.625092  377744 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-55448/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-55448/.minikube}
	I1115 10:35:53.625126  377744 ubuntu.go:190] setting up certificates
	I1115 10:35:53.625143  377744 provision.go:84] configureAuth start
	I1115 10:35:53.625244  377744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-719574
	I1115 10:35:53.644516  377744 provision.go:143] copyHostCerts
	I1115 10:35:53.644586  377744 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem, removing ...
	I1115 10:35:53.644598  377744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem
	I1115 10:35:53.644672  377744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem (1082 bytes)
	I1115 10:35:53.644781  377744 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem, removing ...
	I1115 10:35:53.644790  377744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem
	I1115 10:35:53.644816  377744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem (1123 bytes)
	I1115 10:35:53.644891  377744 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem, removing ...
	I1115 10:35:53.644898  377744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem
	I1115 10:35:53.644921  377744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem (1679 bytes)
	I1115 10:35:53.645022  377744 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem org=jenkins.embed-certs-719574 san=[127.0.0.1 192.168.94.2 embed-certs-719574 localhost minikube]
	I1115 10:35:53.893496  377744 provision.go:177] copyRemoteCerts
	I1115 10:35:53.893597  377744 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:35:53.893653  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:53.913597  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:54.011809  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:35:54.029841  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1115 10:35:54.048781  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 10:35:54.067015  377744 provision.go:87] duration metric: took 441.854991ms to configureAuth
	I1115 10:35:54.067059  377744 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:35:54.067256  377744 config.go:182] Loaded profile config "embed-certs-719574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:54.067376  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:54.087249  377744 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:54.087454  377744 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1115 10:35:54.087469  377744 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:35:54.383177  377744 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:35:54.383205  377744 machine.go:97] duration metric: took 4.228252503s to provisionDockerMachine
	I1115 10:35:54.383221  377744 start.go:293] postStartSetup for "embed-certs-719574" (driver="docker")
	I1115 10:35:54.383246  377744 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:35:54.383323  377744 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:35:54.383389  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:54.402613  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:54.497991  377744 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:35:54.501812  377744 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:35:54.501845  377744 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:35:54.501859  377744 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/addons for local assets ...
	I1115 10:35:54.501927  377744 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/files for local assets ...
	I1115 10:35:54.502073  377744 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem -> 589622.pem in /etc/ssl/certs
	I1115 10:35:54.502192  377744 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:35:54.510401  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:35:54.528845  377744 start.go:296] duration metric: took 145.608503ms for postStartSetup
	I1115 10:35:54.528929  377744 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:35:54.529033  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:54.548704  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:52.076936  378695 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:35:52.077138  378695 start.go:159] libmachine.API.Create for "newest-cni-086099" (driver="docker")
	I1115 10:35:52.077166  378695 client.go:173] LocalClient.Create starting
	I1115 10:35:52.077242  378695 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem
	I1115 10:35:52.077273  378695 main.go:143] libmachine: Decoding PEM data...
	I1115 10:35:52.077289  378695 main.go:143] libmachine: Parsing certificate...
	I1115 10:35:52.077346  378695 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem
	I1115 10:35:52.077364  378695 main.go:143] libmachine: Decoding PEM data...
	I1115 10:35:52.077373  378695 main.go:143] libmachine: Parsing certificate...
	I1115 10:35:52.077693  378695 cli_runner.go:164] Run: docker network inspect newest-cni-086099 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:35:52.094513  378695 cli_runner.go:211] docker network inspect newest-cni-086099 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:35:52.094577  378695 network_create.go:284] running [docker network inspect newest-cni-086099] to gather additional debugging logs...
	I1115 10:35:52.094597  378695 cli_runner.go:164] Run: docker network inspect newest-cni-086099
	W1115 10:35:52.112168  378695 cli_runner.go:211] docker network inspect newest-cni-086099 returned with exit code 1
	I1115 10:35:52.112212  378695 network_create.go:287] error running [docker network inspect newest-cni-086099]: docker network inspect newest-cni-086099: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-086099 not found
	I1115 10:35:52.112227  378695 network_create.go:289] output of [docker network inspect newest-cni-086099]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-086099 not found
	
	** /stderr **
	I1115 10:35:52.112312  378695 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:35:52.130531  378695 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-77644897380e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:f9:e2:83:c3:91} reservation:<nil>}
	I1115 10:35:52.131072  378695 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e808d03b2052 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:03:24:ef:92:a1} reservation:<nil>}
	I1115 10:35:52.131784  378695 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3fb45eeaa66e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1a:b3:b7:0f:2e:99} reservation:<nil>}
	I1115 10:35:52.132406  378695 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-31f43b806931 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0a:cc:8c:d8:0d:c5} reservation:<nil>}
	I1115 10:35:52.133098  378695 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-a057ad05bea0 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:4e:4d:10:e4:db:cb} reservation:<nil>}
	I1115 10:35:52.133911  378695 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-5402d8c1e78a IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:0a:f0:66:0a:22:a5} reservation:<nil>}
	I1115 10:35:52.134802  378695 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f61840}
	I1115 10:35:52.134825  378695 network_create.go:124] attempt to create docker network newest-cni-086099 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1115 10:35:52.134865  378695 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-086099 newest-cni-086099
	I1115 10:35:52.184306  378695 network_create.go:108] docker network newest-cni-086099 192.168.103.0/24 created
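The subnet scan above walks the private 192.168.x.0/24 ranges and picks the first one not already claimed by an existing bridge. The networks it skipped can be listed directly from docker; this is an illustrative one-liner, not part of the test run:

    # List each docker network with its subnet to see which /24s are already taken
    docker network ls -q | xargs docker network inspect \
      -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'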
	I1115 10:35:52.184341  378695 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-086099" container
	I1115 10:35:52.184418  378695 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:35:52.204038  378695 cli_runner.go:164] Run: docker volume create newest-cni-086099 --label name.minikube.sigs.k8s.io=newest-cni-086099 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:35:52.223076  378695 oci.go:103] Successfully created a docker volume newest-cni-086099
	I1115 10:35:52.223154  378695 cli_runner.go:164] Run: docker run --rm --name newest-cni-086099-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-086099 --entrypoint /usr/bin/test -v newest-cni-086099:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:35:52.620626  378695 oci.go:107] Successfully prepared a docker volume newest-cni-086099
	I1115 10:35:52.620689  378695 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:35:52.620707  378695 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:35:52.620778  378695 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-086099:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
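Condensed, the volume preparation and preload extraction recorded above amount to the following docker commands (a sketch reproducing the invocations from this run; the image digest and tarball path are the ones shown in the log):

    KICBASE="gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1"
    PRELOAD="/home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"
    # Create the named volume that will back /var in the node container
    docker volume create newest-cni-086099 \
      --label name.minikube.sigs.k8s.io=newest-cni-086099 \
      --label created_by.minikube.sigs.k8s.io=true
    # Unpack the preloaded CRI-O images into the volume with a one-shot container
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD":/preloaded.tar:ro -v newest-cni-086099:/extractDir \
      "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir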
	I1115 10:35:54.641677  377744 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:35:54.646490  377744 fix.go:56] duration metric: took 4.836578375s for fixHost
	I1115 10:35:54.646531  377744 start.go:83] releasing machines lock for "embed-certs-719574", held for 4.836643994s
	I1115 10:35:54.646605  377744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-719574
	I1115 10:35:54.665925  377744 ssh_runner.go:195] Run: cat /version.json
	I1115 10:35:54.666009  377744 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:35:54.666054  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:54.666061  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:54.685752  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:54.686933  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:54.832262  377744 ssh_runner.go:195] Run: systemctl --version
	I1115 10:35:54.839294  377744 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:35:54.881869  377744 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:35:54.887543  377744 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:35:54.887616  377744 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:35:54.897470  377744 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:35:54.897495  377744 start.go:496] detecting cgroup driver to use...
	I1115 10:35:54.897526  377744 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:35:54.897575  377744 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:35:54.915183  377744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:35:54.936918  377744 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:35:54.937042  377744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:35:54.959514  377744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:35:54.974364  377744 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:35:55.064629  377744 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:35:55.149431  377744 docker.go:234] disabling docker service ...
	I1115 10:35:55.149491  377744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:35:55.164826  377744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:35:55.178539  377744 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:35:55.258146  377744 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:35:55.336854  377744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:35:55.350099  377744 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:35:55.371361  377744 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:35:55.371428  377744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:55.392170  377744 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:35:55.392226  377744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:55.402091  377744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:55.464259  377744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:55.527554  377744 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:35:55.536601  377744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:55.581816  377744 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:55.591398  377744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:55.656666  377744 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:35:55.665181  377744 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:35:55.673411  377744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:55.753200  377744 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:35:57.278236  377744 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.524976792s)
	I1115 10:35:57.278272  377744 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:35:57.278324  377744 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:35:57.282657  377744 start.go:564] Will wait 60s for crictl version
	I1115 10:35:57.282733  377744 ssh_runner.go:195] Run: which crictl
	I1115 10:35:57.286574  377744 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:35:57.314817  377744 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:35:57.314911  377744 ssh_runner.go:195] Run: crio --version
	I1115 10:35:57.343990  377744 ssh_runner.go:195] Run: crio --version
	I1115 10:35:57.373426  377744 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
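Taken together, the runtime switch logged above boils down to pointing crictl at the CRI-O socket, patching the CRI-O drop-in config, and restarting the service. A condensed sketch of the commands already shown:

    # Point crictl at CRI-O
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # Set the pause image and cgroup driver in the CRI-O drop-in config
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    # Restart CRI-O and confirm the runtime answers over CRI
    sudo systemctl daemon-reload && sudo systemctl restart crio
    sudo crictl version    # expect RuntimeName: cri-o, RuntimeVersion: 1.34.1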
	W1115 10:35:54.488332  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:35:56.987904  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	I1115 10:35:57.378513  377744 cli_runner.go:164] Run: docker network inspect embed-certs-719574 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:35:57.402028  377744 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1115 10:35:57.409345  377744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:35:57.420512  377744 kubeadm.go:884] updating cluster {Name:embed-certs-719574 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-719574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:35:57.420680  377744 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:35:57.420740  377744 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:35:57.458228  377744 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:35:57.458259  377744 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:35:57.458316  377744 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:35:57.485027  377744 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:35:57.485050  377744 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:35:57.485058  377744 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1115 10:35:57.485169  377744 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-719574 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-719574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:35:57.485252  377744 ssh_runner.go:195] Run: crio config
	I1115 10:35:57.536095  377744 cni.go:84] Creating CNI manager for ""
	I1115 10:35:57.536127  377744 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:35:57.536147  377744 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:35:57.536177  377744 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-719574 NodeName:embed-certs-719574 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:35:57.536329  377744 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-719574"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:35:57.536407  377744 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:35:57.544702  377744 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:35:57.544775  377744 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:35:57.554019  377744 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1115 10:35:57.569040  377744 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:35:57.585285  377744 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
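With the generated config written to /var/tmp/minikube/kubeadm.yaml.new, a dry run is one way to sanity-check it before kubeadm consumes it (illustrative; minikube drives kubeadm itself on the restart path):

    # Validate the rendered kubeadm config without touching the running cluster
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run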
	I1115 10:35:57.600345  377744 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:35:57.604627  377744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:35:57.619569  377744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:57.710162  377744 ssh_runner.go:195] Run: sudo systemctl start kubelet
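After the daemon-reload, the kubelet unit plus the 10-kubeadm.conf drop-in scp'd above can be verified with systemctl (an illustrative check, not part of the run):

    sudo systemctl cat kubelet        # shows kubelet.service and the 10-kubeadm.conf drop-in
    sudo systemctl is-active kubelet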
	I1115 10:35:57.731269  377744 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574 for IP: 192.168.94.2
	I1115 10:35:57.731297  377744 certs.go:195] generating shared ca certs ...
	I1115 10:35:57.731319  377744 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:57.731508  377744 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 10:35:57.731564  377744 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 10:35:57.731581  377744 certs.go:257] generating profile certs ...
	I1115 10:35:57.731700  377744 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/client.key
	I1115 10:35:57.731784  377744 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/apiserver.key.788254b7
	I1115 10:35:57.731906  377744 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/proxy-client.key
	I1115 10:35:57.732110  377744 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:35:57.732161  377744 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:35:57.732182  377744 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:35:57.732220  377744 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:35:57.732263  377744 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:35:57.732297  377744 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:35:57.732354  377744 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:35:57.733199  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:35:57.753928  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:35:57.776212  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:35:57.798569  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:35:57.855574  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1115 10:35:57.881192  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:35:57.958309  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:35:57.978725  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:35:58.001721  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:35:58.020846  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:35:58.039367  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:35:58.064830  377744 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:35:58.080795  377744 ssh_runner.go:195] Run: openssl version
	I1115 10:35:58.087121  377744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:35:58.095754  377744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:58.099496  377744 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:58.099554  377744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:58.135273  377744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:35:58.145763  377744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:35:58.156943  377744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:35:58.161920  377744 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:35:58.162041  377744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:35:58.206129  377744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:35:58.214420  377744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:35:58.223061  377744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:35:58.226827  377744 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:35:58.226872  377744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:35:58.268503  377744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
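The /etc/ssl/certs/<hash>.0 symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject-name hashes of the installed certificates; the same name can be derived by hand (illustrative):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$h"                                    # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"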
	I1115 10:35:58.278233  377744 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:35:58.282629  377744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:35:58.349655  377744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:35:58.454042  377744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:35:58.576363  377744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:35:58.746644  377744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:35:58.782106  377744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
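Each of the -checkend 86400 probes above succeeds only if the certificate will still be valid 24 hours from now; run manually it looks like this (illustrative):

    sudo openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "valid for at least 24h" || echo "expires within 24h"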
	I1115 10:35:58.871080  377744 kubeadm.go:401] StartCluster: {Name:embed-certs-719574 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-719574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:35:58.871213  377744 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:35:58.871280  377744 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:35:58.960244  377744 cri.go:89] found id: "34a183c86eaa1af613ae4708887513840319366cbb4179e7f1678698297eade2"
	I1115 10:35:58.960271  377744 cri.go:89] found id: "a04037d02e2f108f2bc0bc86b223e37c201372e3009a0b55923609e9c3f5b7b5"
	I1115 10:35:58.960278  377744 cri.go:89] found id: "d2523d5b7384ac5c1a3b025b86aa00148daa68b776dfada0c3e9b0dd751d444c"
	I1115 10:35:58.960283  377744 cri.go:89] found id: "56627175c47b813bf4460ca13a901ec3152cbb4d22f0362e40133db8b19b3e87"
	I1115 10:35:58.960298  377744 cri.go:89] found id: ""
	I1115 10:35:58.960336  377744 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:35:58.974645  377744 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:35:58Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:35:58.974767  377744 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:35:59.046786  377744 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:35:59.046808  377744 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:35:59.046859  377744 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:35:59.056636  377744 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:35:59.057549  377744 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-719574" does not appear in /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:35:59.058047  377744 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-55448/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-719574" cluster setting kubeconfig missing "embed-certs-719574" context setting]
	I1115 10:35:59.058858  377744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:59.060778  377744 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:35:59.069779  377744 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1115 10:35:59.069815  377744 kubeadm.go:602] duration metric: took 22.998235ms to restartPrimaryControlPlane
	I1115 10:35:59.069826  377744 kubeadm.go:403] duration metric: took 198.758279ms to StartCluster
	I1115 10:35:59.069846  377744 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:59.069922  377744 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:35:59.071492  377744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:59.071756  377744 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:35:59.071888  377744 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:35:59.072018  377744 config.go:182] Loaded profile config "embed-certs-719574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:59.072030  377744 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-719574"
	I1115 10:35:59.072050  377744 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-719574"
	W1115 10:35:59.072059  377744 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:35:59.072081  377744 addons.go:70] Setting dashboard=true in profile "embed-certs-719574"
	I1115 10:35:59.072126  377744 addons.go:239] Setting addon dashboard=true in "embed-certs-719574"
	W1115 10:35:59.072141  377744 addons.go:248] addon dashboard should already be in state true
	I1115 10:35:59.072091  377744 host.go:66] Checking if "embed-certs-719574" exists ...
	I1115 10:35:59.072176  377744 host.go:66] Checking if "embed-certs-719574" exists ...
	I1115 10:35:59.072082  377744 addons.go:70] Setting default-storageclass=true in profile "embed-certs-719574"
	I1115 10:35:59.072227  377744 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-719574"
	I1115 10:35:59.072560  377744 cli_runner.go:164] Run: docker container inspect embed-certs-719574 --format={{.State.Status}}
	I1115 10:35:59.072736  377744 cli_runner.go:164] Run: docker container inspect embed-certs-719574 --format={{.State.Status}}
	I1115 10:35:59.072775  377744 cli_runner.go:164] Run: docker container inspect embed-certs-719574 --format={{.State.Status}}
	I1115 10:35:59.073400  377744 out.go:179] * Verifying Kubernetes components...
	I1115 10:35:59.074646  377744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:59.097674  377744 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 10:35:59.097741  377744 addons.go:239] Setting addon default-storageclass=true in "embed-certs-719574"
	W1115 10:35:59.097755  377744 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:35:59.097682  377744 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:35:59.097790  377744 host.go:66] Checking if "embed-certs-719574" exists ...
	I1115 10:35:59.098261  377744 cli_runner.go:164] Run: docker container inspect embed-certs-719574 --format={{.State.Status}}
	I1115 10:35:59.098922  377744 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:59.098988  377744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:35:59.099040  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:59.103435  377744 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 10:35:59.104647  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 10:35:59.104679  377744 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 10:35:59.104749  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:59.119302  377744 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:59.119331  377744 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:35:59.119398  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:59.120171  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:59.125098  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:59.137515  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:59.461029  377744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:59.461402  377744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:59.465397  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 10:35:59.465421  377744 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 10:35:59.550018  377744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:35:59.557165  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 10:35:59.557200  377744 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 10:35:57.180648  378695 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-086099:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.559815955s)
	I1115 10:35:57.180688  378695 kic.go:203] duration metric: took 4.559978988s to extract preloaded images to volume ...
	W1115 10:35:57.180808  378695 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 10:35:57.180907  378695 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 10:35:57.245170  378695 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-086099 --name newest-cni-086099 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-086099 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-086099 --network newest-cni-086099 --ip 192.168.103.2 --volume newest-cni-086099:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 10:35:57.553341  378695 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Running}}
	I1115 10:35:57.574001  378695 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:35:57.595723  378695 cli_runner.go:164] Run: docker exec newest-cni-086099 stat /var/lib/dpkg/alternatives/iptables
	I1115 10:35:57.648675  378695 oci.go:144] the created container "newest-cni-086099" has a running status.
	I1115 10:35:57.648711  378695 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa...
	I1115 10:35:57.758503  378695 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 10:35:57.788103  378695 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:35:57.813502  378695 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 10:35:57.813525  378695 kic_runner.go:114] Args: [docker exec --privileged newest-cni-086099 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 10:35:57.866879  378695 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:35:57.892578  378695 machine.go:94] provisionDockerMachine start ...
	I1115 10:35:57.892683  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:35:57.916142  378695 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:57.916445  378695 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1115 10:35:57.916463  378695 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:35:57.917246  378695 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47936->127.0.0.1:33124: read: connection reset by peer
	I1115 10:36:01.055800  378695 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-086099
	
	I1115 10:36:01.055829  378695 ubuntu.go:182] provisioning hostname "newest-cni-086099"
	I1115 10:36:01.055909  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:01.077686  378695 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:01.078023  378695 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1115 10:36:01.078042  378695 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-086099 && echo "newest-cni-086099" | sudo tee /etc/hostname
	I1115 10:36:01.223717  378695 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-086099
	
	I1115 10:36:01.223807  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:01.242452  378695 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:01.242668  378695 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1115 10:36:01.242685  378695 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-086099' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-086099/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-086099' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:36:01.376856  378695 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:36:01.376893  378695 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-55448/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-55448/.minikube}
	I1115 10:36:01.376932  378695 ubuntu.go:190] setting up certificates
	I1115 10:36:01.376976  378695 provision.go:84] configureAuth start
	I1115 10:36:01.377048  378695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-086099
	I1115 10:36:01.398840  378695 provision.go:143] copyHostCerts
	I1115 10:36:01.398983  378695 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem, removing ...
	I1115 10:36:01.399002  378695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem
	I1115 10:36:01.399077  378695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem (1082 bytes)
	I1115 10:36:01.399173  378695 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem, removing ...
	I1115 10:36:01.399183  378695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem
	I1115 10:36:01.399217  378695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem (1123 bytes)
	I1115 10:36:01.399290  378695 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem, removing ...
	I1115 10:36:01.399300  378695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem
	I1115 10:36:01.399336  378695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem (1679 bytes)
	I1115 10:36:01.399416  378695 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem org=jenkins.newest-cni-086099 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-086099]
	I1115 10:36:01.599358  378695 provision.go:177] copyRemoteCerts
	I1115 10:36:01.599429  378695 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:36:01.599467  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:01.617920  378695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:01.714257  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 10:36:01.736832  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 10:36:01.771414  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:36:01.789744  378695 provision.go:87] duration metric: took 412.746889ms to configureAuth
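The server certificate pushed to /etc/docker/server.pem can be inspected over the same SSH path the provisioner uses (illustrative; the key path and port 33124 are the ones logged for this run, and the -ext flag assumes OpenSSL 1.1.1 or newer):

    ssh -i /home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa \
      -p 33124 docker@127.0.0.1 \
      "sudo openssl x509 -noout -subject -ext subjectAltName -in /etc/docker/server.pem"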
	I1115 10:36:01.789780  378695 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:36:01.790004  378695 config.go:182] Loaded profile config "newest-cni-086099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:01.790111  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:01.807644  378695 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:01.807895  378695 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1115 10:36:01.807913  378695 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1115 10:35:59.487887  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:36:01.488245  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	I1115 10:36:01.988676  367608 node_ready.go:49] node "default-k8s-diff-port-026691" is "Ready"
	I1115 10:36:01.988712  367608 node_ready.go:38] duration metric: took 40.004362414s for node "default-k8s-diff-port-026691" to be "Ready" ...
	I1115 10:36:01.988728  367608 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:36:01.988785  367608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:36:02.002727  367608 api_server.go:72] duration metric: took 41.048135621s to wait for apiserver process to appear ...
	I1115 10:36:02.002761  367608 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:36:02.002786  367608 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1115 10:36:02.007061  367608 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1115 10:36:02.008035  367608 api_server.go:141] control plane version: v1.34.1
	I1115 10:36:02.008064  367608 api_server.go:131] duration metric: took 5.294787ms to wait for apiserver health ...
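The healthz probe logged above can be reproduced with curl against this cluster's non-default port, assuming anonymous access to /healthz is enabled, which is the Kubernetes default (illustrative):

    curl -sk https://192.168.85.2:8444/healthz    # prints "ok" when the apiserver is healthy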
	I1115 10:36:02.008076  367608 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:36:02.011683  367608 system_pods.go:59] 8 kube-system pods found
	I1115 10:36:02.011713  367608 system_pods.go:61] "coredns-66bc5c9577-5q2j4" [e6c4ca54-e0fe-45ee-88a6-33bdccbb876c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:02.011719  367608 system_pods.go:61] "etcd-default-k8s-diff-port-026691" [f8228efa-91a3-41fb-9952-3b4063ddf162] Running
	I1115 10:36:02.011725  367608 system_pods.go:61] "kindnet-hjdrk" [9e1f7579-f5a2-44cd-b77f-71219cd8827d] Running
	I1115 10:36:02.011729  367608 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-026691" [9da86b3b-bfcd-4c40-ac86-ee9d9faeb9b0] Running
	I1115 10:36:02.011732  367608 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-026691" [23d9be87-1de1-4c38-b65e-b084cf0fed25] Running
	I1115 10:36:02.011737  367608 system_pods.go:61] "kube-proxy-c5bw5" [ee48d34b-ae60-4a03-a7bd-df76e089eebb] Running
	I1115 10:36:02.011741  367608 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-026691" [52b3711f-015c-4af6-8672-58642cbc0c16] Running
	I1115 10:36:02.011747  367608 system_pods.go:61] "storage-provisioner" [7dedf7a9-415d-4260-b225-7ca171744768] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:36:02.011757  367608 system_pods.go:74] duration metric: took 3.675183ms to wait for pod list to return data ...
	I1115 10:36:02.011767  367608 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:36:02.014095  367608 default_sa.go:45] found service account: "default"
	I1115 10:36:02.014113  367608 default_sa.go:55] duration metric: took 2.338136ms for default service account to be created ...
	I1115 10:36:02.014121  367608 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:36:02.016619  367608 system_pods.go:86] 8 kube-system pods found
	I1115 10:36:02.016644  367608 system_pods.go:89] "coredns-66bc5c9577-5q2j4" [e6c4ca54-e0fe-45ee-88a6-33bdccbb876c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:02.016650  367608 system_pods.go:89] "etcd-default-k8s-diff-port-026691" [f8228efa-91a3-41fb-9952-3b4063ddf162] Running
	I1115 10:36:02.016657  367608 system_pods.go:89] "kindnet-hjdrk" [9e1f7579-f5a2-44cd-b77f-71219cd8827d] Running
	I1115 10:36:02.016663  367608 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-026691" [9da86b3b-bfcd-4c40-ac86-ee9d9faeb9b0] Running
	I1115 10:36:02.016668  367608 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-026691" [23d9be87-1de1-4c38-b65e-b084cf0fed25] Running
	I1115 10:36:02.016676  367608 system_pods.go:89] "kube-proxy-c5bw5" [ee48d34b-ae60-4a03-a7bd-df76e089eebb] Running
	I1115 10:36:02.016681  367608 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-026691" [52b3711f-015c-4af6-8672-58642cbc0c16] Running
	I1115 10:36:02.016692  367608 system_pods.go:89] "storage-provisioner" [7dedf7a9-415d-4260-b225-7ca171744768] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:36:02.016714  367608 retry.go:31] will retry after 218.810216ms: missing components: kube-dns
	I1115 10:36:02.239606  367608 system_pods.go:86] 8 kube-system pods found
	I1115 10:36:02.239636  367608 system_pods.go:89] "coredns-66bc5c9577-5q2j4" [e6c4ca54-e0fe-45ee-88a6-33bdccbb876c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:02.239642  367608 system_pods.go:89] "etcd-default-k8s-diff-port-026691" [f8228efa-91a3-41fb-9952-3b4063ddf162] Running
	I1115 10:36:02.239648  367608 system_pods.go:89] "kindnet-hjdrk" [9e1f7579-f5a2-44cd-b77f-71219cd8827d] Running
	I1115 10:36:02.239654  367608 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-026691" [9da86b3b-bfcd-4c40-ac86-ee9d9faeb9b0] Running
	I1115 10:36:02.239657  367608 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-026691" [23d9be87-1de1-4c38-b65e-b084cf0fed25] Running
	I1115 10:36:02.239661  367608 system_pods.go:89] "kube-proxy-c5bw5" [ee48d34b-ae60-4a03-a7bd-df76e089eebb] Running
	I1115 10:36:02.239665  367608 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-026691" [52b3711f-015c-4af6-8672-58642cbc0c16] Running
	I1115 10:36:02.239671  367608 system_pods.go:89] "storage-provisioner" [7dedf7a9-415d-4260-b225-7ca171744768] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:36:02.239689  367608 retry.go:31] will retry after 377.391978ms: missing components: kube-dns
	I1115 10:35:59.653179  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 10:35:59.653211  377744 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 10:35:59.670277  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 10:35:59.670303  377744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 10:35:59.757741  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 10:35:59.757796  377744 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 10:35:59.771666  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 10:35:59.771696  377744 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 10:35:59.844282  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 10:35:59.844312  377744 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 10:35:59.859695  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 10:35:59.859723  377744 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 10:35:59.873202  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:35:59.873227  377744 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 10:35:59.887124  377744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:36:03.675772  377744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.21470192s)
	I1115 10:36:03.675861  377744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.214437385s)
	I1115 10:36:03.675941  377744 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.125882332s)
	I1115 10:36:03.676037  377744 node_ready.go:35] waiting up to 6m0s for node "embed-certs-719574" to be "Ready" ...
	I1115 10:36:03.676084  377744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.788916637s)
	I1115 10:36:03.677758  377744 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-719574 addons enable metrics-server
	
	I1115 10:36:03.686848  377744 node_ready.go:49] node "embed-certs-719574" is "Ready"
	I1115 10:36:03.686872  377744 node_ready.go:38] duration metric: took 10.779527ms for node "embed-certs-719574" to be "Ready" ...
	I1115 10:36:03.686888  377744 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:36:03.686937  377744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:36:03.688770  377744 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
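	The addon manifests staged under /etc/kubernetes/addons were applied in a single kubectl invocation (the apply completed above after ~3.79s). A quick follow-up check of the dashboard objects, as a sketch only (it assumes the addon's usual kubernetes-dashboard namespace and the kubectl context created for the profile; it is not part of the recorded test run):

		kubectl --context embed-certs-719574 -n kubernetes-dashboard get deployments,services
		# the Deployment comes from dashboard-dp.yaml, the Service from dashboard-svc.yaml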
	I1115 10:36:02.108071  378695 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:36:02.108099  378695 machine.go:97] duration metric: took 4.215497724s to provisionDockerMachine
	I1115 10:36:02.108110  378695 client.go:176] duration metric: took 10.030938427s to LocalClient.Create
	I1115 10:36:02.108130  378695 start.go:167] duration metric: took 10.030994703s to libmachine.API.Create "newest-cni-086099"
	I1115 10:36:02.108137  378695 start.go:293] postStartSetup for "newest-cni-086099" (driver="docker")
	I1115 10:36:02.108146  378695 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:36:02.108214  378695 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:36:02.108252  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:02.126898  378695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:02.234226  378695 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:36:02.237991  378695 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:36:02.238025  378695 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:36:02.238037  378695 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/addons for local assets ...
	I1115 10:36:02.238104  378695 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/files for local assets ...
	I1115 10:36:02.238204  378695 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem -> 589622.pem in /etc/ssl/certs
	I1115 10:36:02.238321  378695 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:36:02.249461  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:02.279024  378695 start.go:296] duration metric: took 170.869278ms for postStartSetup
	I1115 10:36:02.279408  378695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-086099
	I1115 10:36:02.299580  378695 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/config.json ...
	I1115 10:36:02.299869  378695 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:36:02.299927  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:02.318249  378695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:02.419697  378695 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:36:02.424780  378695 start.go:128] duration metric: took 10.349732709s to createHost
	I1115 10:36:02.424816  378695 start.go:83] releasing machines lock for "newest-cni-086099", held for 10.349888861s
	I1115 10:36:02.424894  378695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-086099
	I1115 10:36:02.442707  378695 ssh_runner.go:195] Run: cat /version.json
	I1115 10:36:02.442769  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:02.442774  378695 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:36:02.442838  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:02.475405  378695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:02.476482  378695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:02.627684  378695 ssh_runner.go:195] Run: systemctl --version
	I1115 10:36:02.635318  378695 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:36:02.690380  378695 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:36:02.695343  378695 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:36:02.695404  378695 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:36:02.723025  378695 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1115 10:36:02.723047  378695 start.go:496] detecting cgroup driver to use...
	I1115 10:36:02.723077  378695 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:36:02.723116  378695 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:36:02.740027  378695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:36:02.757082  378695 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:36:02.757147  378695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:36:02.780790  378695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:36:02.800005  378695 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:36:02.903918  378695 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:36:03.008676  378695 docker.go:234] disabling docker service ...
	I1115 10:36:03.008735  378695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:36:03.029417  378695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:36:03.042351  378695 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:36:03.141887  378695 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:36:03.242543  378695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:36:03.261558  378695 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:36:03.281222  378695 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:36:03.281289  378695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:03.292850  378695 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:36:03.292913  378695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:03.302308  378695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:03.312080  378695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:03.321520  378695 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:36:03.330371  378695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:03.339342  378695 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:03.358403  378695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:03.370875  378695 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:36:03.382720  378695 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:36:03.392373  378695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:03.490238  378695 ssh_runner.go:195] Run: sudo systemctl restart crio
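	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted: the pause image, the cgroup manager, the conmon cgroup, and the unprivileged-port sysctl. A minimal way to confirm the result on the node (a sketch; the expected values are taken from the commands above, the rest of the file is assumed unchanged):

		sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
		# expected, per the edits above:
		#   pause_image = "registry.k8s.io/pause:3.10.1"
		#   cgroup_manager = "cgroupfs"
		#   conmon_cgroup = "pod"
		#   "net.ipv4.ip_unprivileged_port_start=0",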
	I1115 10:36:03.612676  378695 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:36:03.612751  378695 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:36:03.616844  378695 start.go:564] Will wait 60s for crictl version
	I1115 10:36:03.616906  378695 ssh_runner.go:195] Run: which crictl
	I1115 10:36:03.620519  378695 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:36:03.647994  378695 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:36:03.648098  378695 ssh_runner.go:195] Run: crio --version
	I1115 10:36:03.681466  378695 ssh_runner.go:195] Run: crio --version
	I1115 10:36:03.715909  378695 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:36:03.717677  378695 cli_runner.go:164] Run: docker network inspect newest-cni-086099 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:36:03.737236  378695 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1115 10:36:03.741562  378695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:03.754243  378695 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1115 10:36:02.621370  367608 system_pods.go:86] 8 kube-system pods found
	I1115 10:36:02.621401  367608 system_pods.go:89] "coredns-66bc5c9577-5q2j4" [e6c4ca54-e0fe-45ee-88a6-33bdccbb876c] Running
	I1115 10:36:02.621407  367608 system_pods.go:89] "etcd-default-k8s-diff-port-026691" [f8228efa-91a3-41fb-9952-3b4063ddf162] Running
	I1115 10:36:02.621412  367608 system_pods.go:89] "kindnet-hjdrk" [9e1f7579-f5a2-44cd-b77f-71219cd8827d] Running
	I1115 10:36:02.621416  367608 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-026691" [9da86b3b-bfcd-4c40-ac86-ee9d9faeb9b0] Running
	I1115 10:36:02.621421  367608 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-026691" [23d9be87-1de1-4c38-b65e-b084cf0fed25] Running
	I1115 10:36:02.621424  367608 system_pods.go:89] "kube-proxy-c5bw5" [ee48d34b-ae60-4a03-a7bd-df76e089eebb] Running
	I1115 10:36:02.621428  367608 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-026691" [52b3711f-015c-4af6-8672-58642cbc0c16] Running
	I1115 10:36:02.621431  367608 system_pods.go:89] "storage-provisioner" [7dedf7a9-415d-4260-b225-7ca171744768] Running
	I1115 10:36:02.621439  367608 system_pods.go:126] duration metric: took 607.311685ms to wait for k8s-apps to be running ...
	I1115 10:36:02.621445  367608 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:36:02.621494  367608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:36:02.636245  367608 system_svc.go:56] duration metric: took 14.790396ms WaitForService to wait for kubelet
	I1115 10:36:02.636277  367608 kubeadm.go:587] duration metric: took 41.681692299s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:36:02.636317  367608 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:36:02.639743  367608 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:36:02.639770  367608 node_conditions.go:123] node cpu capacity is 8
	I1115 10:36:02.639786  367608 node_conditions.go:105] duration metric: took 3.46192ms to run NodePressure ...
	I1115 10:36:02.639802  367608 start.go:242] waiting for startup goroutines ...
	I1115 10:36:02.639815  367608 start.go:247] waiting for cluster config update ...
	I1115 10:36:02.639834  367608 start.go:256] writing updated cluster config ...
	I1115 10:36:02.640167  367608 ssh_runner.go:195] Run: rm -f paused
	I1115 10:36:02.644506  367608 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:36:02.649994  367608 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5q2j4" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:02.656679  367608 pod_ready.go:94] pod "coredns-66bc5c9577-5q2j4" is "Ready"
	I1115 10:36:02.656844  367608 pod_ready.go:86] duration metric: took 6.756741ms for pod "coredns-66bc5c9577-5q2j4" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:02.659798  367608 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:02.665415  367608 pod_ready.go:94] pod "etcd-default-k8s-diff-port-026691" is "Ready"
	I1115 10:36:02.665516  367608 pod_ready.go:86] duration metric: took 5.656754ms for pod "etcd-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:02.669115  367608 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:02.675621  367608 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-026691" is "Ready"
	I1115 10:36:02.675649  367608 pod_ready.go:86] duration metric: took 6.472611ms for pod "kube-apiserver-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:02.678236  367608 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:03.050408  367608 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-026691" is "Ready"
	I1115 10:36:03.050447  367608 pod_ready.go:86] duration metric: took 372.139168ms for pod "kube-controller-manager-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:03.250079  367608 pod_ready.go:83] waiting for pod "kube-proxy-c5bw5" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:03.649856  367608 pod_ready.go:94] pod "kube-proxy-c5bw5" is "Ready"
	I1115 10:36:03.649889  367608 pod_ready.go:86] duration metric: took 399.777083ms for pod "kube-proxy-c5bw5" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:03.850318  367608 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:04.249888  367608 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-026691" is "Ready"
	I1115 10:36:04.249914  367608 pod_ready.go:86] duration metric: took 399.564892ms for pod "kube-scheduler-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:04.249926  367608 pod_ready.go:40] duration metric: took 1.605379763s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:36:04.304218  367608 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:36:04.306183  367608 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-026691" cluster and "default" namespace by default
	I1115 10:36:03.689851  377744 addons.go:515] duration metric: took 4.61797682s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1115 10:36:03.700992  377744 api_server.go:72] duration metric: took 4.62919911s to wait for apiserver process to appear ...
	I1115 10:36:03.701014  377744 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:36:03.701034  377744 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1115 10:36:03.705295  377744 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1115 10:36:03.706367  377744 api_server.go:141] control plane version: v1.34.1
	I1115 10:36:03.706398  377744 api_server.go:131] duration metric: took 5.374158ms to wait for apiserver health ...
	I1115 10:36:03.706409  377744 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:36:03.710047  377744 system_pods.go:59] 8 kube-system pods found
	I1115 10:36:03.710083  377744 system_pods.go:61] "coredns-66bc5c9577-fjzk5" [d4d185bc-88ec-4edb-b250-6a59ee426bf5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:03.710095  377744 system_pods.go:61] "etcd-embed-certs-719574" [293cb467-4f15-417b-944e-457c3ac7e56f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:36:03.710106  377744 system_pods.go:61] "kindnet-ql2r4" [224e2951-6c97-449d-8ff8-f72aa6d36d60] Running
	I1115 10:36:03.710122  377744 system_pods.go:61] "kube-apiserver-embed-certs-719574" [43a83385-040c-470f-8242-5066617ac8d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:36:03.710135  377744 system_pods.go:61] "kube-controller-manager-embed-certs-719574" [78f16cd1-774e-4556-9db6-bca54bc8214b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:36:03.710141  377744 system_pods.go:61] "kube-proxy-kmc8c" [3534c76c-4b99-4b84-ba00-21d0d49e770f] Running
	I1115 10:36:03.710147  377744 system_pods.go:61] "kube-scheduler-embed-certs-719574" [9b8d2d78-442d-48cc-88be-29b9eb7017e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:36:03.710158  377744 system_pods.go:61] "storage-provisioner" [39c3baf2-24de-475e-aeef-a10825991ca3] Running
	I1115 10:36:03.710165  377744 system_pods.go:74] duration metric: took 3.749108ms to wait for pod list to return data ...
	I1115 10:36:03.710174  377744 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:36:03.712493  377744 default_sa.go:45] found service account: "default"
	I1115 10:36:03.712513  377744 default_sa.go:55] duration metric: took 2.331314ms for default service account to be created ...
	I1115 10:36:03.712522  377744 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:36:03.715355  377744 system_pods.go:86] 8 kube-system pods found
	I1115 10:36:03.715378  377744 system_pods.go:89] "coredns-66bc5c9577-fjzk5" [d4d185bc-88ec-4edb-b250-6a59ee426bf5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:03.715386  377744 system_pods.go:89] "etcd-embed-certs-719574" [293cb467-4f15-417b-944e-457c3ac7e56f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:36:03.715391  377744 system_pods.go:89] "kindnet-ql2r4" [224e2951-6c97-449d-8ff8-f72aa6d36d60] Running
	I1115 10:36:03.715398  377744 system_pods.go:89] "kube-apiserver-embed-certs-719574" [43a83385-040c-470f-8242-5066617ac8d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:36:03.715405  377744 system_pods.go:89] "kube-controller-manager-embed-certs-719574" [78f16cd1-774e-4556-9db6-bca54bc8214b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:36:03.715412  377744 system_pods.go:89] "kube-proxy-kmc8c" [3534c76c-4b99-4b84-ba00-21d0d49e770f] Running
	I1115 10:36:03.715417  377744 system_pods.go:89] "kube-scheduler-embed-certs-719574" [9b8d2d78-442d-48cc-88be-29b9eb7017e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:36:03.715427  377744 system_pods.go:89] "storage-provisioner" [39c3baf2-24de-475e-aeef-a10825991ca3] Running
	I1115 10:36:03.715435  377744 system_pods.go:126] duration metric: took 2.908753ms to wait for k8s-apps to be running ...
	I1115 10:36:03.715443  377744 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:36:03.715482  377744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:36:03.729079  377744 system_svc.go:56] duration metric: took 13.624714ms WaitForService to wait for kubelet
	I1115 10:36:03.729108  377744 kubeadm.go:587] duration metric: took 4.657317817s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:36:03.729130  377744 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:36:03.732380  377744 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:36:03.732409  377744 node_conditions.go:123] node cpu capacity is 8
	I1115 10:36:03.732424  377744 node_conditions.go:105] duration metric: took 3.288836ms to run NodePressure ...
	I1115 10:36:03.732439  377744 start.go:242] waiting for startup goroutines ...
	I1115 10:36:03.732448  377744 start.go:247] waiting for cluster config update ...
	I1115 10:36:03.732463  377744 start.go:256] writing updated cluster config ...
	I1115 10:36:03.732754  377744 ssh_runner.go:195] Run: rm -f paused
	I1115 10:36:03.737164  377744 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:36:03.740586  377744 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fjzk5" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:03.755299  378695 kubeadm.go:884] updating cluster {Name:newest-cni-086099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:36:03.755432  378695 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:36:03.755482  378695 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:03.794722  378695 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:03.794749  378695 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:36:03.794805  378695 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:03.826109  378695 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:03.826142  378695 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:36:03.826153  378695 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1115 10:36:03.826264  378695 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-086099 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:36:03.826354  378695 ssh_runner.go:195] Run: crio config
	I1115 10:36:03.879671  378695 cni.go:84] Creating CNI manager for ""
	I1115 10:36:03.879701  378695 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:36:03.879717  378695 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1115 10:36:03.879739  378695 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-086099 NodeName:newest-cni-086099 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:36:03.879883  378695 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-086099"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:36:03.879988  378695 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:36:03.888992  378695 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:36:03.889052  378695 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:36:03.897294  378695 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1115 10:36:03.911151  378695 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:36:03.930297  378695 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
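	At this point the kubelet drop-in, the kubelet unit, and the kubeadm config printed above have been copied to the node; kubeadm.yaml.new is moved into place further down before init runs. A sanity check of the generated config would look roughly like this (a sketch, not part of the test run; kubeadm config validate is assumed to be available in the bundled v1.34.1 binaries):

		sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml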
	I1115 10:36:03.945072  378695 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:36:03.948706  378695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:03.959243  378695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:04.058938  378695 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:36:04.093857  378695 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099 for IP: 192.168.103.2
	I1115 10:36:04.093888  378695 certs.go:195] generating shared ca certs ...
	I1115 10:36:04.093909  378695 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:04.094076  378695 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 10:36:04.094148  378695 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 10:36:04.094163  378695 certs.go:257] generating profile certs ...
	I1115 10:36:04.094230  378695 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/client.key
	I1115 10:36:04.094258  378695 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/client.crt with IP's: []
	I1115 10:36:04.385453  378695 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/client.crt ...
	I1115 10:36:04.385478  378695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/client.crt: {Name:mk40f6a053043aca087e720d3a4da44f4215e456 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:04.385623  378695 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/client.key ...
	I1115 10:36:04.385633  378695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/client.key: {Name:mk7ba7a9aed87498b12d0ea82f1fd16a2802adbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:04.385729  378695 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key.d719cdad
	I1115 10:36:04.385749  378695 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.crt.d719cdad with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1115 10:36:04.782829  378695 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.crt.d719cdad ...
	I1115 10:36:04.782863  378695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.crt.d719cdad: {Name:mkcdec4fb6d5949c6190ac10a0f9caeb369ef1df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:04.783103  378695 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key.d719cdad ...
	I1115 10:36:04.783129  378695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key.d719cdad: {Name:mk74203e2c301a3a488fc95324a401039fa8106d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:04.783253  378695 certs.go:382] copying /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.crt.d719cdad -> /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.crt
	I1115 10:36:04.783373  378695 certs.go:386] copying /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key.d719cdad -> /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key
	I1115 10:36:04.783463  378695 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.key
	I1115 10:36:04.783486  378695 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.crt with IP's: []
	I1115 10:36:04.900301  378695 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.crt ...
	I1115 10:36:04.900329  378695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.crt: {Name:mk0d5b4842614d84db6a4d32b9e40b0ee2961026 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:04.900527  378695 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.key ...
	I1115 10:36:04.900547  378695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.key: {Name:mkc0cf01fd3204cf2eb33c45d49bdb1a3af7d389 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:04.900769  378695 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:36:04.900806  378695 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:36:04.900817  378695 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:36:04.900837  378695 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:36:04.900863  378695 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:36:04.900884  378695 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:36:04.900931  378695 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:04.901498  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:36:04.920490  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:36:04.938524  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:36:04.956167  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:36:04.974935  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 10:36:04.995270  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:36:05.016110  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:36:05.034440  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
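	The profile certificates generated above (client, apiserver, proxy-client) are now in place under /var/lib/minikube/certs. The SANs on the apiserver cert can be checked against the IPs passed at generation time, 10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2 (a sketch, not part of the test run):

		sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'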
	I1115 10:36:05.051948  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:36:05.071136  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:36:05.100067  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:36:05.120144  378695 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:36:05.133751  378695 ssh_runner.go:195] Run: openssl version
	I1115 10:36:05.140442  378695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:36:05.150520  378695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:05.155339  378695 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:05.155411  378695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:05.205520  378695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:36:05.214306  378695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:36:05.222589  378695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:36:05.226661  378695 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:36:05.226723  378695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:36:05.269094  378695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:36:05.282750  378695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:36:05.291785  378695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:36:05.295742  378695 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:36:05.295801  378695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:36:05.341059  378695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
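	The test/ln pairs above build the standard ca-certificates layout: each PEM under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (b5213941, 51391683 and 3ec20f2e in this run). The same mapping by hand, using the minikube CA as an example:

		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0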
	I1115 10:36:05.352931  378695 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:36:05.357729  378695 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 10:36:05.357794  378695 kubeadm.go:401] StartCluster: {Name:newest-cni-086099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:05.357898  378695 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:36:05.358038  378695 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:36:05.389342  378695 cri.go:89] found id: ""
	I1115 10:36:05.389409  378695 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:36:05.399176  378695 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 10:36:05.407568  378695 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 10:36:05.407619  378695 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 10:36:05.415732  378695 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 10:36:05.415750  378695 kubeadm.go:158] found existing configuration files:
	
	I1115 10:36:05.415789  378695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 10:36:05.423933  378695 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 10:36:05.424003  378695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 10:36:05.431425  378695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 10:36:05.439333  378695 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 10:36:05.439396  378695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 10:36:05.446777  378695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 10:36:05.454437  378695 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 10:36:05.454481  378695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 10:36:05.461644  378695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 10:36:05.468875  378695 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 10:36:05.468937  378695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
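	The grep/rm sequence above is the stale-kubeconfig cleanup: each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise removed before kubeadm init. The effect, written as one loop (a sketch of the behaviour, not minikube's implementation):

		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
		done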
	I1115 10:36:05.476821  378695 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 10:36:05.516431  378695 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 10:36:05.516536  378695 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 10:36:05.536153  378695 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 10:36:05.536251  378695 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1115 10:36:05.536322  378695 kubeadm.go:319] OS: Linux
	I1115 10:36:05.536373  378695 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 10:36:05.536430  378695 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 10:36:05.536519  378695 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 10:36:05.536598  378695 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 10:36:05.536682  378695 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 10:36:05.536769  378695 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 10:36:05.536832  378695 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 10:36:05.536877  378695 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 10:36:05.536920  378695 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 10:36:05.598690  378695 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 10:36:05.598871  378695 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 10:36:05.599041  378695 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 10:36:05.606076  378695 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 10:36:05.608588  378695 out.go:252]   - Generating certificates and keys ...
	I1115 10:36:05.608685  378695 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 10:36:05.608773  378695 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 10:36:06.648403  378695 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 10:36:06.817549  378695 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	
	
	==> CRI-O <==
	Nov 15 10:35:39 no-preload-283677 conmon[1240]: conmon 2ed35452acbea6332ff4 <ninfo>: container 1242 exited with status 1
	Nov 15 10:35:40 no-preload-283677 crio[676]: time="2025-11-15T10:35:40.157502626Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=df1e6dd1-7318-4c8b-91bc-5ffa9cf64224 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:35:40 no-preload-283677 crio[676]: time="2025-11-15T10:35:40.158502996Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=86b1da1c-cb4a-4ffe-8488-9d2d62f4f127 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:35:40 no-preload-283677 crio[676]: time="2025-11-15T10:35:40.159615726Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=16fb2d70-5a33-420e-a0b6-2b420fe2dec8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:35:40 no-preload-283677 crio[676]: time="2025-11-15T10:35:40.159755894Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:35:40 no-preload-283677 crio[676]: time="2025-11-15T10:35:40.166297094Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:35:40 no-preload-283677 crio[676]: time="2025-11-15T10:35:40.166487289Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/328acd9dd634cca1ed733c1b4af1466bc7c6b10d95e2574f93fc6d7dcaaf8618/merged/etc/passwd: no such file or directory"
	Nov 15 10:35:40 no-preload-283677 crio[676]: time="2025-11-15T10:35:40.166527059Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/328acd9dd634cca1ed733c1b4af1466bc7c6b10d95e2574f93fc6d7dcaaf8618/merged/etc/group: no such file or directory"
	Nov 15 10:35:40 no-preload-283677 crio[676]: time="2025-11-15T10:35:40.166833868Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:35:40 no-preload-283677 crio[676]: time="2025-11-15T10:35:40.191673371Z" level=info msg="Created container 8bd2c710a2c580a8988b5b3071d9f3587a5b7a4d80023277e15840a6b9f2c4fa: kube-system/storage-provisioner/storage-provisioner" id=16fb2d70-5a33-420e-a0b6-2b420fe2dec8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:35:40 no-preload-283677 crio[676]: time="2025-11-15T10:35:40.192406472Z" level=info msg="Starting container: 8bd2c710a2c580a8988b5b3071d9f3587a5b7a4d80023277e15840a6b9f2c4fa" id=84a58085-b82d-4069-b967-66203fe35312 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:35:40 no-preload-283677 crio[676]: time="2025-11-15T10:35:40.194226185Z" level=info msg="Started container" PID=1853 containerID=8bd2c710a2c580a8988b5b3071d9f3587a5b7a4d80023277e15840a6b9f2c4fa description=kube-system/storage-provisioner/storage-provisioner id=84a58085-b82d-4069-b967-66203fe35312 name=/runtime.v1.RuntimeService/StartContainer sandboxID=13b8bae9216512b5bf4758ca3a1dfaa68cca71d6c3811941f471827761cc754a
	Nov 15 10:36:01 no-preload-283677 crio[676]: time="2025-11-15T10:36:01.896877596Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=157d570e-5001-43a2-84fa-58861c49160c name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:36:01 no-preload-283677 crio[676]: time="2025-11-15T10:36:01.898066116Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=780c95a1-a0d0-4d69-b9ca-08903fb67ee4 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:36:01 no-preload-283677 crio[676]: time="2025-11-15T10:36:01.899156108Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2g5rq/dashboard-metrics-scraper" id=5550b067-6351-45f6-b925-c6e6f82dd105 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:36:01 no-preload-283677 crio[676]: time="2025-11-15T10:36:01.89929904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:01 no-preload-283677 crio[676]: time="2025-11-15T10:36:01.907302702Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:01 no-preload-283677 crio[676]: time="2025-11-15T10:36:01.9079356Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:01 no-preload-283677 crio[676]: time="2025-11-15T10:36:01.926226277Z" level=info msg="Created container 2d20d5dc1c2b66cabb2156b9f8d8669dcb25cbd73f94552cea3da087e47a37c2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2g5rq/dashboard-metrics-scraper" id=5550b067-6351-45f6-b925-c6e6f82dd105 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:36:01 no-preload-283677 crio[676]: time="2025-11-15T10:36:01.927181675Z" level=info msg="Starting container: 2d20d5dc1c2b66cabb2156b9f8d8669dcb25cbd73f94552cea3da087e47a37c2" id=8b0f5df2-1489-418e-82fd-d84cbfc35fcc name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:36:01 no-preload-283677 crio[676]: time="2025-11-15T10:36:01.929651368Z" level=info msg="Started container" PID=1890 containerID=2d20d5dc1c2b66cabb2156b9f8d8669dcb25cbd73f94552cea3da087e47a37c2 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2g5rq/dashboard-metrics-scraper id=8b0f5df2-1489-418e-82fd-d84cbfc35fcc name=/runtime.v1.RuntimeService/StartContainer sandboxID=8f018dcce94ab2eb56b37b5ecd10329c9069076fb33aafd7641bcecfc92ae8ae
	Nov 15 10:36:01 no-preload-283677 conmon[1888]: conmon 2d20d5dc1c2b66cabb21 <ninfo>: container 1890 exited with status 1
	Nov 15 10:36:02 no-preload-283677 crio[676]: time="2025-11-15T10:36:02.216570162Z" level=info msg="Removing container: 8328e5d7622543edd42c5075bda24e795b23b86ebfab1de17757659a27b5e8dd" id=8da1d56f-d4b0-4f97-976e-aee2890deff7 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:36:02 no-preload-283677 crio[676]: time="2025-11-15T10:36:02.222726231Z" level=info msg="Error loading conmon cgroup of container 8328e5d7622543edd42c5075bda24e795b23b86ebfab1de17757659a27b5e8dd: cgroup deleted" id=8da1d56f-d4b0-4f97-976e-aee2890deff7 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:36:02 no-preload-283677 crio[676]: time="2025-11-15T10:36:02.226421848Z" level=info msg="Removed container 8328e5d7622543edd42c5075bda24e795b23b86ebfab1de17757659a27b5e8dd: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2g5rq/dashboard-metrics-scraper" id=8da1d56f-d4b0-4f97-976e-aee2890deff7 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	2d20d5dc1c2b6       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago        Exited              dashboard-metrics-scraper   3                   8f018dcce94ab       dashboard-metrics-scraper-6ffb444bf9-2g5rq   kubernetes-dashboard
	8bd2c710a2c58       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           29 seconds ago       Running             storage-provisioner         2                   13b8bae921651       storage-provisioner                          kube-system
	72e788657e34c       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   47 seconds ago       Running             kubernetes-dashboard        0                   193e8217312fd       kubernetes-dashboard-855c9754f9-2q95v        kubernetes-dashboard
	a1b1db57d4972       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           About a minute ago   Running             coredns                     1                   2fa3b9c8ec41c       coredns-66bc5c9577-66nkj                     kube-system
	122754c749135       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           About a minute ago   Running             busybox                     1                   1be0704e3445b       busybox                                      default
	e19aa2c491434       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           About a minute ago   Running             kindnet-cni                 1                   28c885731f594       kindnet-x5rwg                                kube-system
	2ed35452acbea       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           About a minute ago   Exited              storage-provisioner         1                   13b8bae921651       storage-provisioner                          kube-system
	fbd534126f75a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           About a minute ago   Running             kube-proxy                  1                   4bf0ad2e7057d       kube-proxy-vjbxg                             kube-system
	324a3ff1cd89d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        1                   fe46357c0d663       etcd-no-preload-283677                       kube-system
	8c532dc6e6980       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           About a minute ago   Running             kube-scheduler              1                   d0390893cc6f9       kube-scheduler-no-preload-283677             kube-system
	ac246fc71f81d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           About a minute ago   Running             kube-controller-manager     1                   69bfb06fee1e6       kube-controller-manager-no-preload-283677    kube-system
	c26ba954b1e2f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           About a minute ago   Running             kube-apiserver              1                   4de9b34c6c186       kube-apiserver-no-preload-283677             kube-system
	
	
	==> coredns [a1b1db57d497261f854972caaaabfb2ff94437f156ebd9a824ae6eec9b4717be] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59996 - 52847 "HINFO IN 2498217211002336889.1691539149669243410. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.058792624s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               no-preload-283677
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-283677
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=no-preload-283677
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_34_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:34:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-283677
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:35:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:35:38 +0000   Sat, 15 Nov 2025 10:34:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:35:38 +0000   Sat, 15 Nov 2025 10:34:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:35:38 +0000   Sat, 15 Nov 2025 10:34:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:35:38 +0000   Sat, 15 Nov 2025 10:34:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-283677
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                24a4b1bc-3dc5-430d-9221-78b09868633f
	  Boot ID:                    6a33f504-3a8c-4d49-8fc5-301cf5275efc
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 coredns-66bc5c9577-66nkj                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     117s
	  kube-system                 etcd-no-preload-283677                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m2s
	  kube-system                 kindnet-x5rwg                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      117s
	  kube-system                 kube-apiserver-no-preload-283677              250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-no-preload-283677     200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-vjbxg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-scheduler-no-preload-283677              100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-2g5rq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-2q95v         0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 115s               kube-proxy       
	  Normal   Starting                 60s                kube-proxy       
	  Normal   Starting                 2m3s               kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m3s               kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     2m2s               kubelet          Node no-preload-283677 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m2s               kubelet          Node no-preload-283677 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  2m2s               kubelet          Node no-preload-283677 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           118s               node-controller  Node no-preload-283677 event: Registered Node no-preload-283677 in Controller
	  Normal   NodeReady                102s               kubelet          Node no-preload-283677 status is now: NodeReady
	  Normal   Starting                 67s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 67s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  66s (x8 over 67s)  kubelet          Node no-preload-283677 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    66s (x8 over 67s)  kubelet          Node no-preload-283677 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     66s (x8 over 67s)  kubelet          Node no-preload-283677 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                node-controller  Node no-preload-283677 event: Registered Node no-preload-283677 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 7f 53 f9 15 93 08 06
	[  +0.017821] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c6 66 25 e9 45 28 08 06
	[ +19.178294] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 a3 65 b9 28 15 08 06
	[  +0.347929] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 6d df 33 a5 ad 08 06
	[  +0.000319] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 7f 53 f9 15 93 08 06
	[Nov15 10:33] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 96 ed 10 e8 9a 80 08 06
	[  +0.000481] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 a3 65 b9 28 15 08 06
	[ +35.214217] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 c2 9a 56 52 0c 08 06
	[  +9.104720] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 c2 c4 d1 7c dd 08 06
	[Nov15 10:34] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 68 1e 80 53 05 08 06
	[  +0.000444] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 c2 c4 d1 7c dd 08 06
	[ +18.836046] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 98 db 78 4f 0b 08 06
	[  +0.000708] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 c2 9a 56 52 0c 08 06
	
	
	==> etcd [324a3ff1cd89d80c17c217faf6db6f2b6c9f52f5abe13f2e83485e1e03b0c7aa] <==
	{"level":"warn","ts":"2025-11-15T10:35:06.916165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.924367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.984709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.991644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.999466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.006497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.015822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.025336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.033723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.073199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.081654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.089447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.096797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.105026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.112188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.120165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.131304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.140469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.172470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.187866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.199156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.208138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.215553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.223201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:07.293525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50122","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:36:10 up  2:18,  0 user,  load average: 3.95, 4.37, 2.78
	Linux no-preload-283677 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e19aa2c4914343607f446514b29eff501e18401aa8e8ae99efee7a13e1b84831] <==
	I1115 10:35:09.745587       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1115 10:35:09.745752       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:35:09.745768       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:35:09.745790       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:35:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:35:10.047282       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:35:10.047313       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:35:10.047327       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:35:10.047685       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 10:35:10.447809       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:35:10.447846       1 metrics.go:72] Registering metrics
	I1115 10:35:10.447936       1 controller.go:711] "Syncing nftables rules"
	I1115 10:35:20.046533       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:35:20.046589       1 main.go:301] handling current node
	I1115 10:35:30.047226       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:35:30.047276       1 main.go:301] handling current node
	I1115 10:35:40.047202       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:35:40.047252       1 main.go:301] handling current node
	I1115 10:35:50.054066       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:35:50.054124       1 main.go:301] handling current node
	I1115 10:36:00.054044       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:36:00.054076       1 main.go:301] handling current node
	I1115 10:36:10.046809       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:36:10.046845       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c26ba954b1e2fc1ea4ef12fc0801c0a31171b23e67cf48fdeb9207cbdb3ba3b0] <==
	I1115 10:35:07.995590       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 10:35:08.001492       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:35:08.067754       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 10:35:08.067913       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 10:35:08.068111       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 10:35:08.068132       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 10:35:08.068326       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 10:35:08.068907       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1115 10:35:08.068940       1 policy_source.go:240] refreshing policies
	I1115 10:35:08.069381       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:35:08.070105       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1115 10:35:08.072427       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1115 10:35:08.076758       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 10:35:08.076776       1 cache.go:39] Caches are synced for autoregister controller
	I1115 10:35:08.835038       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 10:35:08.861494       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:35:08.909236       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:35:09.069371       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:35:09.070226       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:35:09.103179       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:35:09.309601       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.247.49"}
	I1115 10:35:09.523802       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.82.231"}
	I1115 10:35:12.435281       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:35:12.683882       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:35:12.886280       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [ac246fc71f81d0fb5e3cd430730c903f2ca388376feb3a8dc321eb565aa6c5ee] <==
	I1115 10:35:12.270603       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1115 10:35:12.270681       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 10:35:12.270699       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 10:35:12.278317       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 10:35:12.278348       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 10:35:12.278354       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 10:35:12.278496       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1115 10:35:12.278528       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 10:35:12.279683       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 10:35:12.279762       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 10:35:12.279996       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 10:35:12.281971       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1115 10:35:12.283164       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 10:35:12.285442       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1115 10:35:12.286743       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:35:12.286752       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 10:35:12.288982       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 10:35:12.289071       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 10:35:12.289158       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-283677"
	I1115 10:35:12.289223       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1115 10:35:12.291368       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 10:35:12.316278       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:35:12.329516       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:35:12.329642       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:35:12.329658       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [fbd534126f75ad8fd1d5fdcbd5ef4977e3b134a0b5f0bb5ef906b59631045d73] <==
	I1115 10:35:09.586327       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:35:09.661280       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:35:09.762859       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:35:09.762900       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1115 10:35:09.763082       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:35:09.795648       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:35:09.795731       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:35:09.811610       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:35:09.819601       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:35:09.820070       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:35:09.824207       1 config.go:200] "Starting service config controller"
	I1115 10:35:09.824352       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:35:09.824460       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:35:09.824718       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:35:09.824788       1 config.go:309] "Starting node config controller"
	I1115 10:35:09.824819       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:35:09.825681       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:35:09.826579       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:35:09.826612       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:35:09.925896       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 10:35:09.930872       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:35:09.927002       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [8c532dc6e69808afe1a0ffb587f828c0b3f6fa37c71b2fbc5ce4abdafdedf008] <==
	I1115 10:35:05.573329       1 serving.go:386] Generated self-signed cert in-memory
	W1115 10:35:07.983757       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1115 10:35:07.983856       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W1115 10:35:07.983889       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1115 10:35:07.983928       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1115 10:35:08.080459       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 10:35:08.080488       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:35:08.083406       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:35:08.083483       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:35:08.084594       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 10:35:08.084689       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:35:08.184415       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:35:17 no-preload-283677 kubelet[821]: I1115 10:35:17.089999     821 scope.go:117] "RemoveContainer" containerID="983ac1cf399ecf93330e2267f7ddf4d73213d8ac7cd14b1e9f060882ae9c8c7e"
	Nov 15 10:35:18 no-preload-283677 kubelet[821]: I1115 10:35:18.094375     821 scope.go:117] "RemoveContainer" containerID="983ac1cf399ecf93330e2267f7ddf4d73213d8ac7cd14b1e9f060882ae9c8c7e"
	Nov 15 10:35:18 no-preload-283677 kubelet[821]: I1115 10:35:18.094517     821 scope.go:117] "RemoveContainer" containerID="315f43f0dc2929afbeb2abbedd88bd1f6ac2617078c4a55ed7654db76deb9986"
	Nov 15 10:35:18 no-preload-283677 kubelet[821]: E1115 10:35:18.094697     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2g5rq_kubernetes-dashboard(0b38b832-e405-4967-9a32-6a627d9c19d2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2g5rq" podUID="0b38b832-e405-4967-9a32-6a627d9c19d2"
	Nov 15 10:35:19 no-preload-283677 kubelet[821]: I1115 10:35:19.098737     821 scope.go:117] "RemoveContainer" containerID="315f43f0dc2929afbeb2abbedd88bd1f6ac2617078c4a55ed7654db76deb9986"
	Nov 15 10:35:19 no-preload-283677 kubelet[821]: E1115 10:35:19.098918     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2g5rq_kubernetes-dashboard(0b38b832-e405-4967-9a32-6a627d9c19d2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2g5rq" podUID="0b38b832-e405-4967-9a32-6a627d9c19d2"
	Nov 15 10:35:20 no-preload-283677 kubelet[821]: I1115 10:35:20.101472     821 scope.go:117] "RemoveContainer" containerID="315f43f0dc2929afbeb2abbedd88bd1f6ac2617078c4a55ed7654db76deb9986"
	Nov 15 10:35:20 no-preload-283677 kubelet[821]: E1115 10:35:20.101760     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2g5rq_kubernetes-dashboard(0b38b832-e405-4967-9a32-6a627d9c19d2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2g5rq" podUID="0b38b832-e405-4967-9a32-6a627d9c19d2"
	Nov 15 10:35:23 no-preload-283677 kubelet[821]: I1115 10:35:23.119784     821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2q95v" podStartSLOduration=2.011255788 podStartE2EDuration="11.119762329s" podCreationTimestamp="2025-11-15 10:35:12 +0000 UTC" firstStartedPulling="2025-11-15 10:35:13.192334118 +0000 UTC m=+9.403773310" lastFinishedPulling="2025-11-15 10:35:22.300840657 +0000 UTC m=+18.512279851" observedRunningTime="2025-11-15 10:35:23.119686143 +0000 UTC m=+19.331125354" watchObservedRunningTime="2025-11-15 10:35:23.119762329 +0000 UTC m=+19.331201542"
	Nov 15 10:35:32 no-preload-283677 kubelet[821]: I1115 10:35:32.896043     821 scope.go:117] "RemoveContainer" containerID="315f43f0dc2929afbeb2abbedd88bd1f6ac2617078c4a55ed7654db76deb9986"
	Nov 15 10:35:33 no-preload-283677 kubelet[821]: I1115 10:35:33.136713     821 scope.go:117] "RemoveContainer" containerID="315f43f0dc2929afbeb2abbedd88bd1f6ac2617078c4a55ed7654db76deb9986"
	Nov 15 10:35:33 no-preload-283677 kubelet[821]: I1115 10:35:33.136938     821 scope.go:117] "RemoveContainer" containerID="8328e5d7622543edd42c5075bda24e795b23b86ebfab1de17757659a27b5e8dd"
	Nov 15 10:35:33 no-preload-283677 kubelet[821]: E1115 10:35:33.137163     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2g5rq_kubernetes-dashboard(0b38b832-e405-4967-9a32-6a627d9c19d2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2g5rq" podUID="0b38b832-e405-4967-9a32-6a627d9c19d2"
	Nov 15 10:35:38 no-preload-283677 kubelet[821]: I1115 10:35:38.618739     821 scope.go:117] "RemoveContainer" containerID="8328e5d7622543edd42c5075bda24e795b23b86ebfab1de17757659a27b5e8dd"
	Nov 15 10:35:38 no-preload-283677 kubelet[821]: E1115 10:35:38.618949     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2g5rq_kubernetes-dashboard(0b38b832-e405-4967-9a32-6a627d9c19d2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2g5rq" podUID="0b38b832-e405-4967-9a32-6a627d9c19d2"
	Nov 15 10:35:40 no-preload-283677 kubelet[821]: I1115 10:35:40.157145     821 scope.go:117] "RemoveContainer" containerID="2ed35452acbea6332ff49a4e3b850561f71e2992e41503717b83a4170ecdae97"
	Nov 15 10:35:48 no-preload-283677 kubelet[821]: I1115 10:35:48.896860     821 scope.go:117] "RemoveContainer" containerID="8328e5d7622543edd42c5075bda24e795b23b86ebfab1de17757659a27b5e8dd"
	Nov 15 10:35:48 no-preload-283677 kubelet[821]: E1115 10:35:48.897063     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2g5rq_kubernetes-dashboard(0b38b832-e405-4967-9a32-6a627d9c19d2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2g5rq" podUID="0b38b832-e405-4967-9a32-6a627d9c19d2"
	Nov 15 10:36:01 no-preload-283677 kubelet[821]: I1115 10:36:01.896364     821 scope.go:117] "RemoveContainer" containerID="8328e5d7622543edd42c5075bda24e795b23b86ebfab1de17757659a27b5e8dd"
	Nov 15 10:36:02 no-preload-283677 kubelet[821]: I1115 10:36:02.214871     821 scope.go:117] "RemoveContainer" containerID="8328e5d7622543edd42c5075bda24e795b23b86ebfab1de17757659a27b5e8dd"
	Nov 15 10:36:02 no-preload-283677 kubelet[821]: I1115 10:36:02.215104     821 scope.go:117] "RemoveContainer" containerID="2d20d5dc1c2b66cabb2156b9f8d8669dcb25cbd73f94552cea3da087e47a37c2"
	Nov 15 10:36:02 no-preload-283677 kubelet[821]: E1115 10:36:02.215301     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2g5rq_kubernetes-dashboard(0b38b832-e405-4967-9a32-6a627d9c19d2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2g5rq" podUID="0b38b832-e405-4967-9a32-6a627d9c19d2"
	Nov 15 10:36:04 no-preload-283677 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:36:04 no-preload-283677 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:36:04 no-preload-283677 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [72e788657e34c4ceb611b3c182b01cfe009c0ebba075aa6c882e7e27152c31ee] <==
	2025/11/15 10:35:22 Starting overwatch
	2025/11/15 10:35:22 Using namespace: kubernetes-dashboard
	2025/11/15 10:35:22 Using in-cluster config to connect to apiserver
	2025/11/15 10:35:22 Using secret token for csrf signing
	2025/11/15 10:35:22 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 10:35:22 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 10:35:22 Successful initial request to the apiserver, version: v1.34.1
	2025/11/15 10:35:22 Generating JWE encryption key
	2025/11/15 10:35:22 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 10:35:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 10:35:22 Initializing JWE encryption key from synchronized object
	2025/11/15 10:35:22 Creating in-cluster Sidecar client
	2025/11/15 10:35:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:35:22 Serving insecurely on HTTP port: 9090
	2025/11/15 10:35:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [2ed35452acbea6332ff49a4e3b850561f71e2992e41503717b83a4170ecdae97] <==
	I1115 10:35:09.588925       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 10:35:39.592868       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [8bd2c710a2c580a8988b5b3071d9f3587a5b7a4d80023277e15840a6b9f2c4fa] <==
	W1115 10:35:40.216533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:43.671781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:47.931835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:51.529817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:54.583087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:57.606268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:57.611525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:35:57.611671       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:35:57.611773       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ca833280-d4c1-43fb-bae2-a3f123cb9113", APIVersion:"v1", ResourceVersion:"641", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-283677_992ef63f-1a8a-4666-97db-42a83525fa09 became leader
	I1115 10:35:57.611815       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-283677_992ef63f-1a8a-4666-97db-42a83525fa09!
	W1115 10:35:57.615332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:57.618869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:35:57.711933       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-283677_992ef63f-1a8a-4666-97db-42a83525fa09!
	W1115 10:35:59.621622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:59.625421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:01.628989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:01.634243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:03.637888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:03.642011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:05.645146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:05.649994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:07.654280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:07.659270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:09.662898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:09.669995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-283677 -n no-preload-283677
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-283677 -n no-preload-283677: exit status 2 (339.033966ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-283677 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.73s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-026691 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-026691 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (252.466936ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:16Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
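The MK_ADDON_ENABLE_PAUSED failure above comes from minikube's paused-state probe, which shells into the node and runs the "sudo runc list -f json" command shown in the stderr. As a minimal sketch, the same probe can be rerun by hand, assuming the profile name from this log (the "open /run/runc: no such file or directory" message suggests the default runc state directory is simply absent on this CRI-O node):

  # Rerun the probe minikube uses for its "check paused" step (command taken from the error above).
  minikube -p default-k8s-diff-port-026691 ssh -- sudo runc list -f json

  # On a CRI-O node, crictl reports the container states the runtime itself tracks.
  minikube -p default-k8s-diff-port-026691 ssh -- sudo crictl ps -a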
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-026691 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-026691 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-026691 describe deploy/metrics-server -n kube-system: exit status 1 (70.761355ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-026691 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
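For reference, whether the addon rewrote the metrics-server image to the fake.domain registry can be checked directly against the deployment, assuming it exists (here it was never created, per the NotFound error above); a sketch of that verification step:

  # Prints the container image(s) the metrics-server deployment references.
  kubectl --context default-k8s-diff-port-026691 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'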
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-026691
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-026691:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "acb25a518a850a99b04a9ecc3ee276eeb45082aa05801ad29c50dc8761212798",
	        "Created": "2025-11-15T10:34:56.785604479Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 368703,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:34:56.823315375Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/acb25a518a850a99b04a9ecc3ee276eeb45082aa05801ad29c50dc8761212798/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/acb25a518a850a99b04a9ecc3ee276eeb45082aa05801ad29c50dc8761212798/hostname",
	        "HostsPath": "/var/lib/docker/containers/acb25a518a850a99b04a9ecc3ee276eeb45082aa05801ad29c50dc8761212798/hosts",
	        "LogPath": "/var/lib/docker/containers/acb25a518a850a99b04a9ecc3ee276eeb45082aa05801ad29c50dc8761212798/acb25a518a850a99b04a9ecc3ee276eeb45082aa05801ad29c50dc8761212798-json.log",
	        "Name": "/default-k8s-diff-port-026691",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-026691:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-026691",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "acb25a518a850a99b04a9ecc3ee276eeb45082aa05801ad29c50dc8761212798",
	                "LowerDir": "/var/lib/docker/overlay2/330082c0af87d9272d4fc35061c9dcf149c94a075cbe50833b0301c28218115b-init/diff:/var/lib/docker/overlay2/507c85b6fb43d9c98216fac79d7a8d08dd20b2a63d0fbfc758336e5d38c04044/diff",
	                "MergedDir": "/var/lib/docker/overlay2/330082c0af87d9272d4fc35061c9dcf149c94a075cbe50833b0301c28218115b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/330082c0af87d9272d4fc35061c9dcf149c94a075cbe50833b0301c28218115b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/330082c0af87d9272d4fc35061c9dcf149c94a075cbe50833b0301c28218115b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-026691",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-026691/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-026691",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-026691",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-026691",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5b5e03a588cbb6199a14ece40cd65b51487f3f30ac364cf854f78dac21d1f8e0",
	            "SandboxKey": "/var/run/docker/netns/5b5e03a588cb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-026691": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a057ad05bea093d4f46407b93bd0d97f5f0b4004a2f1151b31de55e2e2a06fb7",
	                    "EndpointID": "fcb18d77a3fb8b23fa14a6b705857eb751c6627b4ea79dd2266f91e883971800",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "4a:a4:91:e6:b3:d5",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-026691",
	                        "acb25a518a85"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
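The inspect dump above can be narrowed to the fields this post-mortem actually checks by passing a Go-template format string; a sketch against the same container name:

  # Container run state and pause flag only.
  docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' default-k8s-diff-port-026691

  # Published host ports as JSON.
  docker inspect -f '{{json .NetworkSettings.Ports}}' default-k8s-diff-port-026691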
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-026691 -n default-k8s-diff-port-026691
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-026691 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-026691 logs -n 25: (1.04987272s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-931243 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo containerd config dump                                                                                                                                                                                                  │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo crio config                                                                                                                                                                                                             │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ delete  │ -p bridge-931243                                                                                                                                                                                                                              │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ delete  │ -p disable-driver-mounts-435527                                                                                                                                                                                                               │ disable-driver-mounts-435527 │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ start   │ -p default-k8s-diff-port-026691 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p no-preload-283677 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ start   │ -p no-preload-283677 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable metrics-server -p embed-certs-719574 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ stop    │ -p embed-certs-719574 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ image   │ old-k8s-version-087235 image list --format=json                                                                                                                                                                                               │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ pause   │ -p old-k8s-version-087235 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ delete  │ -p old-k8s-version-087235                                                                                                                                                                                                                     │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-719574 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p embed-certs-719574 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ delete  │ -p old-k8s-version-087235                                                                                                                                                                                                                     │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p newest-cni-086099 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ image   │ no-preload-283677 image list --format=json                                                                                                                                                                                                    │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ pause   │ -p no-preload-283677 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ delete  │ -p no-preload-283677                                                                                                                                                                                                                          │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p no-preload-283677                                                                                                                                                                                                                          │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-026691 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:35:51
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:35:51.880635  378695 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:35:51.880972  378695 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:51.880985  378695 out.go:374] Setting ErrFile to fd 2...
	I1115 10:35:51.880990  378695 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:51.881260  378695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 10:35:51.881819  378695 out.go:368] Setting JSON to false
	I1115 10:35:51.883178  378695 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8289,"bootTime":1763194663,"procs":305,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:35:51.883287  378695 start.go:143] virtualization: kvm guest
	I1115 10:35:51.885121  378695 out.go:179] * [newest-cni-086099] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:35:51.886362  378695 notify.go:221] Checking for updates...
	I1115 10:35:51.886418  378695 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 10:35:51.887691  378695 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:35:51.888785  378695 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:35:51.889883  378695 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube
	I1115 10:35:51.891041  378695 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:35:51.895496  378695 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:35:51.897243  378695 config.go:182] Loaded profile config "default-k8s-diff-port-026691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:51.897400  378695 config.go:182] Loaded profile config "embed-certs-719574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:51.897562  378695 config.go:182] Loaded profile config "no-preload-283677": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:51.897686  378695 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:35:51.923206  378695 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:35:51.923309  378695 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:35:51.980066  378695 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:true NGoroutines:77 SystemTime:2025-11-15 10:35:51.97030866 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed
by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:35:51.980169  378695 docker.go:319] overlay module found
	I1115 10:35:51.982196  378695 out.go:179] * Using the docker driver based on user configuration
	I1115 10:35:51.983355  378695 start.go:309] selected driver: docker
	I1115 10:35:51.983369  378695 start.go:930] validating driver "docker" against <nil>
	I1115 10:35:51.983380  378695 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:35:51.984213  378695 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:35:52.044923  378695 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:76 SystemTime:2025-11-15 10:35:52.034876039 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:35:52.045179  378695 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1115 10:35:52.045216  378695 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1115 10:35:52.045457  378695 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1115 10:35:52.047189  378695 out.go:179] * Using Docker driver with root privileges
	I1115 10:35:52.048407  378695 cni.go:84] Creating CNI manager for ""
	I1115 10:35:52.048473  378695 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:35:52.048484  378695 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 10:35:52.048535  378695 start.go:353] cluster config:
	{Name:newest-cni-086099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:35:52.049826  378695 out.go:179] * Starting "newest-cni-086099" primary control-plane node in "newest-cni-086099" cluster
	I1115 10:35:52.050909  378695 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:35:52.052056  378695 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:35:52.053065  378695 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:35:52.053098  378695 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 10:35:52.053116  378695 cache.go:65] Caching tarball of preloaded images
	I1115 10:35:52.053151  378695 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:35:52.053229  378695 preload.go:238] Found /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 10:35:52.053246  378695 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:35:52.053398  378695 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/config.json ...
	I1115 10:35:52.053424  378695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/config.json: {Name:mkf8d02e5e19217377f4420029b0cc1adccada68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:52.074755  378695 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:35:52.074774  378695 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:35:52.074789  378695 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:35:52.074816  378695 start.go:360] acquireMachinesLock for newest-cni-086099: {Name:mk9065475199777f18a95aabcc9dbfda12f72647 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:35:52.074909  378695 start.go:364] duration metric: took 76.491µs to acquireMachinesLock for "newest-cni-086099"
	I1115 10:35:52.074932  378695 start.go:93] Provisioning new machine with config: &{Name:newest-cni-086099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:35:52.075027  378695 start.go:125] createHost starting for "" (driver="docker")
	W1115 10:35:48.630700  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:50.630784  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	I1115 10:35:51.131341  368849 pod_ready.go:94] pod "coredns-66bc5c9577-66nkj" is "Ready"
	I1115 10:35:51.131376  368849 pod_ready.go:86] duration metric: took 41.005975825s for pod "coredns-66bc5c9577-66nkj" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.134231  368849 pod_ready.go:83] waiting for pod "etcd-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.138317  368849 pod_ready.go:94] pod "etcd-no-preload-283677" is "Ready"
	I1115 10:35:51.138345  368849 pod_ready.go:86] duration metric: took 4.088368ms for pod "etcd-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.140317  368849 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.143990  368849 pod_ready.go:94] pod "kube-apiserver-no-preload-283677" is "Ready"
	I1115 10:35:51.144012  368849 pod_ready.go:86] duration metric: took 3.672536ms for pod "kube-apiserver-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.145780  368849 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.329880  368849 pod_ready.go:94] pod "kube-controller-manager-no-preload-283677" is "Ready"
	I1115 10:35:51.329907  368849 pod_ready.go:86] duration metric: took 184.110671ms for pod "kube-controller-manager-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.529891  368849 pod_ready.go:83] waiting for pod "kube-proxy-vjbxg" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.929529  368849 pod_ready.go:94] pod "kube-proxy-vjbxg" is "Ready"
	I1115 10:35:51.929559  368849 pod_ready.go:86] duration metric: took 399.636424ms for pod "kube-proxy-vjbxg" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 10:35:49.488114  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:35:51.988145  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	I1115 10:35:52.129598  368849 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:52.529568  368849 pod_ready.go:94] pod "kube-scheduler-no-preload-283677" is "Ready"
	I1115 10:35:52.529597  368849 pod_ready.go:86] duration metric: took 399.970584ms for pod "kube-scheduler-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:52.529608  368849 pod_ready.go:40] duration metric: took 42.409442772s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:35:52.581745  368849 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:35:52.583831  368849 out.go:179] * Done! kubectl is now configured to use "no-preload-283677" cluster and "default" namespace by default
	I1115 10:35:49.830432  377744 out.go:252] * Restarting existing docker container for "embed-certs-719574" ...
	I1115 10:35:49.830517  377744 cli_runner.go:164] Run: docker start embed-certs-719574
	I1115 10:35:50.114791  377744 cli_runner.go:164] Run: docker container inspect embed-certs-719574 --format={{.State.Status}}
	I1115 10:35:50.134754  377744 kic.go:430] container "embed-certs-719574" state is running.
	I1115 10:35:50.135204  377744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-719574
	I1115 10:35:50.154606  377744 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/config.json ...
	I1115 10:35:50.154928  377744 machine.go:94] provisionDockerMachine start ...
	I1115 10:35:50.155043  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:50.174749  377744 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:50.175176  377744 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1115 10:35:50.175216  377744 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:35:50.176012  377744 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38764->127.0.0.1:33119: read: connection reset by peer
	I1115 10:35:53.310173  377744 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-719574
	
	I1115 10:35:53.310214  377744 ubuntu.go:182] provisioning hostname "embed-certs-719574"
	I1115 10:35:53.310354  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:53.329392  377744 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:53.329615  377744 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1115 10:35:53.329634  377744 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-719574 && echo "embed-certs-719574" | sudo tee /etc/hostname
	I1115 10:35:53.472294  377744 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-719574
	
	I1115 10:35:53.472411  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:53.492862  377744 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:53.493213  377744 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1115 10:35:53.493264  377744 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-719574' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-719574/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-719574' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:35:53.625059  377744 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:35:53.625092  377744 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-55448/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-55448/.minikube}
	I1115 10:35:53.625126  377744 ubuntu.go:190] setting up certificates
	I1115 10:35:53.625143  377744 provision.go:84] configureAuth start
	I1115 10:35:53.625244  377744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-719574
	I1115 10:35:53.644516  377744 provision.go:143] copyHostCerts
	I1115 10:35:53.644586  377744 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem, removing ...
	I1115 10:35:53.644598  377744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem
	I1115 10:35:53.644672  377744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem (1082 bytes)
	I1115 10:35:53.644781  377744 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem, removing ...
	I1115 10:35:53.644790  377744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem
	I1115 10:35:53.644816  377744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem (1123 bytes)
	I1115 10:35:53.644891  377744 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem, removing ...
	I1115 10:35:53.644898  377744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem
	I1115 10:35:53.644921  377744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem (1679 bytes)
	I1115 10:35:53.645022  377744 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem org=jenkins.embed-certs-719574 san=[127.0.0.1 192.168.94.2 embed-certs-719574 localhost minikube]
	I1115 10:35:53.893496  377744 provision.go:177] copyRemoteCerts
	I1115 10:35:53.893597  377744 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:35:53.893653  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:53.913597  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:54.011809  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:35:54.029841  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1115 10:35:54.048781  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 10:35:54.067015  377744 provision.go:87] duration metric: took 441.854991ms to configureAuth
	I1115 10:35:54.067059  377744 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:35:54.067256  377744 config.go:182] Loaded profile config "embed-certs-719574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:54.067376  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:54.087249  377744 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:54.087454  377744 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1115 10:35:54.087469  377744 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:35:54.383177  377744 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:35:54.383205  377744 machine.go:97] duration metric: took 4.228252503s to provisionDockerMachine
	I1115 10:35:54.383221  377744 start.go:293] postStartSetup for "embed-certs-719574" (driver="docker")
	I1115 10:35:54.383246  377744 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:35:54.383323  377744 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:35:54.383389  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:54.402613  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:54.497991  377744 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:35:54.501812  377744 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:35:54.501845  377744 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:35:54.501859  377744 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/addons for local assets ...
	I1115 10:35:54.501927  377744 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/files for local assets ...
	I1115 10:35:54.502073  377744 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem -> 589622.pem in /etc/ssl/certs
	I1115 10:35:54.502192  377744 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:35:54.510401  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:35:54.528845  377744 start.go:296] duration metric: took 145.608503ms for postStartSetup
	I1115 10:35:54.528929  377744 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:35:54.529033  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:54.548704  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:52.076936  378695 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:35:52.077138  378695 start.go:159] libmachine.API.Create for "newest-cni-086099" (driver="docker")
	I1115 10:35:52.077166  378695 client.go:173] LocalClient.Create starting
	I1115 10:35:52.077242  378695 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem
	I1115 10:35:52.077273  378695 main.go:143] libmachine: Decoding PEM data...
	I1115 10:35:52.077289  378695 main.go:143] libmachine: Parsing certificate...
	I1115 10:35:52.077346  378695 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem
	I1115 10:35:52.077364  378695 main.go:143] libmachine: Decoding PEM data...
	I1115 10:35:52.077373  378695 main.go:143] libmachine: Parsing certificate...
	I1115 10:35:52.077693  378695 cli_runner.go:164] Run: docker network inspect newest-cni-086099 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:35:52.094513  378695 cli_runner.go:211] docker network inspect newest-cni-086099 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:35:52.094577  378695 network_create.go:284] running [docker network inspect newest-cni-086099] to gather additional debugging logs...
	I1115 10:35:52.094597  378695 cli_runner.go:164] Run: docker network inspect newest-cni-086099
	W1115 10:35:52.112168  378695 cli_runner.go:211] docker network inspect newest-cni-086099 returned with exit code 1
	I1115 10:35:52.112212  378695 network_create.go:287] error running [docker network inspect newest-cni-086099]: docker network inspect newest-cni-086099: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-086099 not found
	I1115 10:35:52.112227  378695 network_create.go:289] output of [docker network inspect newest-cni-086099]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-086099 not found
	
	** /stderr **
	I1115 10:35:52.112312  378695 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:35:52.130531  378695 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-77644897380e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:f9:e2:83:c3:91} reservation:<nil>}
	I1115 10:35:52.131072  378695 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e808d03b2052 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:03:24:ef:92:a1} reservation:<nil>}
	I1115 10:35:52.131784  378695 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3fb45eeaa66e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1a:b3:b7:0f:2e:99} reservation:<nil>}
	I1115 10:35:52.132406  378695 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-31f43b806931 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0a:cc:8c:d8:0d:c5} reservation:<nil>}
	I1115 10:35:52.133098  378695 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-a057ad05bea0 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:4e:4d:10:e4:db:cb} reservation:<nil>}
	I1115 10:35:52.133911  378695 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-5402d8c1e78a IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:0a:f0:66:0a:22:a5} reservation:<nil>}
	I1115 10:35:52.134802  378695 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f61840}
	I1115 10:35:52.134825  378695 network_create.go:124] attempt to create docker network newest-cni-086099 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1115 10:35:52.134865  378695 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-086099 newest-cni-086099
	I1115 10:35:52.184306  378695 network_create.go:108] docker network newest-cni-086099 192.168.103.0/24 created
	I1115 10:35:52.184341  378695 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-086099" container
	I1115 10:35:52.184418  378695 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:35:52.204038  378695 cli_runner.go:164] Run: docker volume create newest-cni-086099 --label name.minikube.sigs.k8s.io=newest-cni-086099 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:35:52.223076  378695 oci.go:103] Successfully created a docker volume newest-cni-086099
	I1115 10:35:52.223154  378695 cli_runner.go:164] Run: docker run --rm --name newest-cni-086099-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-086099 --entrypoint /usr/bin/test -v newest-cni-086099:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:35:52.620626  378695 oci.go:107] Successfully prepared a docker volume newest-cni-086099
	I1115 10:35:52.620689  378695 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:35:52.620707  378695 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:35:52.620778  378695 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-086099:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
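
The network_create lines above show the subnet scan: every 192.168.x.0/24 candidate already backed by a bridge (49, 58, 67, 76, 85, 94) is skipped, the first free one, 192.168.103.0/24, is handed to `docker network create`, and the node then gets the static IP .2 in that range. A rough sketch of that scan, under the assumption, inferred from the log rather than from minikube source, that candidates step by 9 in the third octet:

	package main

	import "fmt"

	// firstFreeSubnet walks the 192.168.x.0/24 candidates in steps of 9 (49, 58, 67, ...)
	// and returns the first one not present in taken.
	func firstFreeSubnet(taken map[string]bool) string {
		for octet := 49; octet <= 255; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[subnet] {
				return subnet
			}
		}
		return ""
	}

	func main() {
		// Subnets reported as taken in the log above.
		taken := map[string]bool{
			"192.168.49.0/24": true, "192.168.58.0/24": true, "192.168.67.0/24": true,
			"192.168.76.0/24": true, "192.168.85.0/24": true, "192.168.94.0/24": true,
		}
		fmt.Println(firstFreeSubnet(taken)) // 192.168.103.0/24, matching the log
	}
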
	I1115 10:35:54.641677  377744 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:35:54.646490  377744 fix.go:56] duration metric: took 4.836578375s for fixHost
	I1115 10:35:54.646531  377744 start.go:83] releasing machines lock for "embed-certs-719574", held for 4.836643994s
	I1115 10:35:54.646605  377744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-719574
	I1115 10:35:54.665925  377744 ssh_runner.go:195] Run: cat /version.json
	I1115 10:35:54.666009  377744 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:35:54.666054  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:54.666061  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:54.685752  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:54.686933  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:54.832262  377744 ssh_runner.go:195] Run: systemctl --version
	I1115 10:35:54.839294  377744 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:35:54.881869  377744 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:35:54.887543  377744 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:35:54.887616  377744 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:35:54.897470  377744 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:35:54.897495  377744 start.go:496] detecting cgroup driver to use...
	I1115 10:35:54.897526  377744 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:35:54.897575  377744 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:35:54.915183  377744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:35:54.936918  377744 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:35:54.937042  377744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:35:54.959514  377744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:35:54.974364  377744 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:35:55.064629  377744 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:35:55.149431  377744 docker.go:234] disabling docker service ...
	I1115 10:35:55.149491  377744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:35:55.164826  377744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:35:55.178539  377744 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:35:55.258146  377744 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:35:55.336854  377744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:35:55.350099  377744 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:35:55.371361  377744 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:35:55.371428  377744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:55.392170  377744 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:35:55.392226  377744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:55.402091  377744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:55.464259  377744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:55.527554  377744 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:35:55.536601  377744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:55.581816  377744 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:55.591398  377744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:55.656666  377744 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:35:55.665181  377744 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:35:55.673411  377744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:55.753200  377744 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:35:57.278236  377744 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.524976792s)
	I1115 10:35:57.278272  377744 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:35:57.278324  377744 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:35:57.282657  377744 start.go:564] Will wait 60s for crictl version
	I1115 10:35:57.282733  377744 ssh_runner.go:195] Run: which crictl
	I1115 10:35:57.286574  377744 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:35:57.314817  377744 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:35:57.314911  377744 ssh_runner.go:195] Run: crio --version
	I1115 10:35:57.343990  377744 ssh_runner.go:195] Run: crio --version
	I1115 10:35:57.373426  377744 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
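
The block above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed before restarting CRI-O: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is set to "cgroupfs" to match the detected host driver, conmon_cgroup is pinned to "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. The first three edits expressed with Go's regexp package, as an illustration only (the file path and keys come from the log; this is not minikube's implementation):

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		s := string(data)

		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
		s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10.1"`)

		// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)

		// Keep conmon in the pod cgroup; the sysctl injection in the log follows the same pattern.
		s = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
			ReplaceAllString(s, `conmon_cgroup = "pod"`)

		if err := os.WriteFile(conf, []byte(s), 0644); err != nil {
			panic(err)
		}
		// The log then runs: systemctl daemon-reload && systemctl restart crio.
	}
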
	W1115 10:35:54.488332  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:35:56.987904  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	I1115 10:35:57.378513  377744 cli_runner.go:164] Run: docker network inspect embed-certs-719574 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:35:57.402028  377744 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1115 10:35:57.409345  377744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:35:57.420512  377744 kubeadm.go:884] updating cluster {Name:embed-certs-719574 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-719574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:35:57.420680  377744 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:35:57.420740  377744 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:35:57.458228  377744 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:35:57.458259  377744 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:35:57.458316  377744 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:35:57.485027  377744 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:35:57.485050  377744 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:35:57.485058  377744 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1115 10:35:57.485169  377744 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-719574 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-719574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:35:57.485252  377744 ssh_runner.go:195] Run: crio config
	I1115 10:35:57.536095  377744 cni.go:84] Creating CNI manager for ""
	I1115 10:35:57.536127  377744 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:35:57.536147  377744 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:35:57.536177  377744 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-719574 NodeName:embed-certs-719574 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:35:57.536329  377744 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-719574"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:35:57.536407  377744 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:35:57.544702  377744 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:35:57.544775  377744 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:35:57.554019  377744 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1115 10:35:57.569040  377744 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:35:57.585285  377744 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1115 10:35:57.600345  377744 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:35:57.604627  377744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:35:57.619569  377744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:57.710162  377744 ssh_runner.go:195] Run: sudo systemctl start kubelet
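
Just above, the control-plane.minikube.internal entry is written into /etc/hosts by filtering out any stale mapping and appending a fresh "192.168.94.2 <tab> control-plane.minikube.internal" line via a temp file, after which kubelet is restarted. The same idiom in Go, writing the filtered content straight back (the log stages through /tmp/h.$$ only because the final copy needs sudo; a sketch, not minikube's code):

	package main

	import (
		"os"
		"strings"
	)

	// setHostsEntry rewrites the hosts file so that exactly one line maps ip to host,
	// mirroring the grep -v / echo / cp sequence shown in the log.
	func setHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		kept := lines[:0]
		for _, line := range lines {
			if strings.HasSuffix(line, "\t"+host) {
				continue // drop any stale mapping for this host
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		// Address and hostname taken from the log above; requires root to touch /etc/hosts.
		if err := setHostsEntry("/etc/hosts", "192.168.94.2", "control-plane.minikube.internal"); err != nil {
			panic(err)
		}
	}
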
	I1115 10:35:57.731269  377744 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574 for IP: 192.168.94.2
	I1115 10:35:57.731297  377744 certs.go:195] generating shared ca certs ...
	I1115 10:35:57.731319  377744 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:57.731508  377744 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 10:35:57.731564  377744 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 10:35:57.731581  377744 certs.go:257] generating profile certs ...
	I1115 10:35:57.731700  377744 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/client.key
	I1115 10:35:57.731784  377744 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/apiserver.key.788254b7
	I1115 10:35:57.731906  377744 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/proxy-client.key
	I1115 10:35:57.732110  377744 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:35:57.732161  377744 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:35:57.732182  377744 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:35:57.732220  377744 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:35:57.732263  377744 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:35:57.732297  377744 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:35:57.732354  377744 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:35:57.733199  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:35:57.753928  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:35:57.776212  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:35:57.798569  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:35:57.855574  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1115 10:35:57.881192  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:35:57.958309  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:35:57.978725  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:35:58.001721  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:35:58.020846  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:35:58.039367  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:35:58.064830  377744 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:35:58.080795  377744 ssh_runner.go:195] Run: openssl version
	I1115 10:35:58.087121  377744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:35:58.095754  377744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:58.099496  377744 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:58.099554  377744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:58.135273  377744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:35:58.145763  377744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:35:58.156943  377744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:35:58.161920  377744 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:35:58.162041  377744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:35:58.206129  377744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:35:58.214420  377744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:35:58.223061  377744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:35:58.226827  377744 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:35:58.226872  377744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:35:58.268503  377744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:35:58.278233  377744 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:35:58.282629  377744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:35:58.349655  377744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:35:58.454042  377744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:35:58.576363  377744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:35:58.746644  377744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:35:58.782106  377744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
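
Each `openssl x509 -noout -in <cert> -checkend 86400` above asks whether the certificate expires within the next 24 hours; a failing check would force the profile certificates to be regenerated before StartCluster. The equivalent test in pure Go with crypto/x509 (certificate paths as in the log; a sketch only):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// Same certificates the log checks with `openssl x509 -checkend 86400`.
		for _, p := range []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		} {
			soon, err := expiresWithin(p, 24*time.Hour)
			if err != nil {
				panic(err)
			}
			fmt.Printf("%s expires within 24h: %v\n", p, soon)
		}
	}
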
	I1115 10:35:58.871080  377744 kubeadm.go:401] StartCluster: {Name:embed-certs-719574 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-719574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:35:58.871213  377744 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:35:58.871280  377744 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:35:58.960244  377744 cri.go:89] found id: "34a183c86eaa1af613ae4708887513840319366cbb4179e7f1678698297eade2"
	I1115 10:35:58.960271  377744 cri.go:89] found id: "a04037d02e2f108f2bc0bc86b223e37c201372e3009a0b55923609e9c3f5b7b5"
	I1115 10:35:58.960278  377744 cri.go:89] found id: "d2523d5b7384ac5c1a3b025b86aa00148daa68b776dfada0c3e9b0dd751d444c"
	I1115 10:35:58.960283  377744 cri.go:89] found id: "56627175c47b813bf4460ca13a901ec3152cbb4d22f0362e40133db8b19b3e87"
	I1115 10:35:58.960298  377744 cri.go:89] found id: ""
	I1115 10:35:58.960336  377744 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:35:58.974645  377744 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:35:58Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:35:58.974767  377744 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:35:59.046786  377744 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:35:59.046808  377744 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:35:59.046859  377744 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:35:59.056636  377744 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:35:59.057549  377744 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-719574" does not appear in /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:35:59.058047  377744 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-55448/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-719574" cluster setting kubeconfig missing "embed-certs-719574" context setting]
	I1115 10:35:59.058858  377744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:59.060778  377744 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:35:59.069779  377744 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1115 10:35:59.069815  377744 kubeadm.go:602] duration metric: took 22.998235ms to restartPrimaryControlPlane
	I1115 10:35:59.069826  377744 kubeadm.go:403] duration metric: took 198.758279ms to StartCluster
	I1115 10:35:59.069846  377744 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:59.069922  377744 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:35:59.071492  377744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:59.071756  377744 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:35:59.071888  377744 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:35:59.072018  377744 config.go:182] Loaded profile config "embed-certs-719574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:59.072030  377744 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-719574"
	I1115 10:35:59.072050  377744 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-719574"
	W1115 10:35:59.072059  377744 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:35:59.072081  377744 addons.go:70] Setting dashboard=true in profile "embed-certs-719574"
	I1115 10:35:59.072126  377744 addons.go:239] Setting addon dashboard=true in "embed-certs-719574"
	W1115 10:35:59.072141  377744 addons.go:248] addon dashboard should already be in state true
	I1115 10:35:59.072091  377744 host.go:66] Checking if "embed-certs-719574" exists ...
	I1115 10:35:59.072176  377744 host.go:66] Checking if "embed-certs-719574" exists ...
	I1115 10:35:59.072082  377744 addons.go:70] Setting default-storageclass=true in profile "embed-certs-719574"
	I1115 10:35:59.072227  377744 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-719574"
	I1115 10:35:59.072560  377744 cli_runner.go:164] Run: docker container inspect embed-certs-719574 --format={{.State.Status}}
	I1115 10:35:59.072736  377744 cli_runner.go:164] Run: docker container inspect embed-certs-719574 --format={{.State.Status}}
	I1115 10:35:59.072775  377744 cli_runner.go:164] Run: docker container inspect embed-certs-719574 --format={{.State.Status}}
	I1115 10:35:59.073400  377744 out.go:179] * Verifying Kubernetes components...
	I1115 10:35:59.074646  377744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:59.097674  377744 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 10:35:59.097741  377744 addons.go:239] Setting addon default-storageclass=true in "embed-certs-719574"
	W1115 10:35:59.097755  377744 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:35:59.097682  377744 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:35:59.097790  377744 host.go:66] Checking if "embed-certs-719574" exists ...
	I1115 10:35:59.098261  377744 cli_runner.go:164] Run: docker container inspect embed-certs-719574 --format={{.State.Status}}
	I1115 10:35:59.098922  377744 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:59.098988  377744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:35:59.099040  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:59.103435  377744 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 10:35:59.104647  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 10:35:59.104679  377744 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 10:35:59.104749  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:59.119302  377744 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:59.119331  377744 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:35:59.119398  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:59.120171  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:59.125098  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:59.137515  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:59.461029  377744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:59.461402  377744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:59.465397  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 10:35:59.465421  377744 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 10:35:59.550018  377744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:35:59.557165  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 10:35:59.557200  377744 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 10:35:57.180648  378695 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-086099:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.559815955s)
	I1115 10:35:57.180688  378695 kic.go:203] duration metric: took 4.559978988s to extract preloaded images to volume ...
	W1115 10:35:57.180808  378695 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 10:35:57.180907  378695 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 10:35:57.245170  378695 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-086099 --name newest-cni-086099 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-086099 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-086099 --network newest-cni-086099 --ip 192.168.103.2 --volume newest-cni-086099:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 10:35:57.553341  378695 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Running}}
	I1115 10:35:57.574001  378695 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:35:57.595723  378695 cli_runner.go:164] Run: docker exec newest-cni-086099 stat /var/lib/dpkg/alternatives/iptables
	I1115 10:35:57.648675  378695 oci.go:144] the created container "newest-cni-086099" has a running status.
	I1115 10:35:57.648711  378695 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa...
	I1115 10:35:57.758503  378695 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 10:35:57.788103  378695 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:35:57.813502  378695 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 10:35:57.813525  378695 kic_runner.go:114] Args: [docker exec --privileged newest-cni-086099 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 10:35:57.866879  378695 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:35:57.892578  378695 machine.go:94] provisionDockerMachine start ...
	I1115 10:35:57.892683  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:35:57.916142  378695 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:57.916445  378695 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1115 10:35:57.916463  378695 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:35:57.917246  378695 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47936->127.0.0.1:33124: read: connection reset by peer
	I1115 10:36:01.055800  378695 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-086099
	
	I1115 10:36:01.055829  378695 ubuntu.go:182] provisioning hostname "newest-cni-086099"
	I1115 10:36:01.055909  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:01.077686  378695 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:01.078023  378695 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1115 10:36:01.078042  378695 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-086099 && echo "newest-cni-086099" | sudo tee /etc/hostname
	I1115 10:36:01.223717  378695 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-086099
	
	I1115 10:36:01.223807  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:01.242452  378695 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:01.242668  378695 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1115 10:36:01.242685  378695 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-086099' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-086099/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-086099' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:36:01.376856  378695 main.go:143] libmachine: SSH cmd err, output: <nil>: 
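
provisionDockerMachine drives the steps above over SSH to 127.0.0.1:33124 with the machine's id_rsa key: the first `hostname` probe is retried past the initial "connection reset by peer" until sshd inside the container answers, and the hostname and /etc/hosts are then fixed up. A bare-bones version of one such command using golang.org/x/crypto/ssh (user, port, and key path come from the log; the retry loop is omitted, and this is not libmachine's own client):

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a localhost-published container port
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33124", cfg) // host port published for 22/tcp, as in the log
		if err != nil {
			panic(err) // the provisioner retries here while sshd is still coming up
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()
		out, err := session.Output("hostname")
		if err != nil {
			panic(err)
		}
		fmt.Printf("remote hostname: %s", out) // expect: newest-cni-086099
	}
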
	I1115 10:36:01.376893  378695 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-55448/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-55448/.minikube}
	I1115 10:36:01.376932  378695 ubuntu.go:190] setting up certificates
	I1115 10:36:01.376976  378695 provision.go:84] configureAuth start
	I1115 10:36:01.377048  378695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-086099
	I1115 10:36:01.398840  378695 provision.go:143] copyHostCerts
	I1115 10:36:01.398983  378695 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem, removing ...
	I1115 10:36:01.399002  378695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem
	I1115 10:36:01.399077  378695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem (1082 bytes)
	I1115 10:36:01.399173  378695 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem, removing ...
	I1115 10:36:01.399183  378695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem
	I1115 10:36:01.399217  378695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem (1123 bytes)
	I1115 10:36:01.399290  378695 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem, removing ...
	I1115 10:36:01.399300  378695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem
	I1115 10:36:01.399336  378695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem (1679 bytes)
	I1115 10:36:01.399416  378695 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem org=jenkins.newest-cni-086099 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-086099]
	I1115 10:36:01.599358  378695 provision.go:177] copyRemoteCerts
	I1115 10:36:01.599429  378695 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:36:01.599467  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:01.617920  378695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:01.714257  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 10:36:01.736832  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 10:36:01.771414  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:36:01.789744  378695 provision.go:87] duration metric: took 412.746889ms to configureAuth
	I1115 10:36:01.789780  378695 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:36:01.790004  378695 config.go:182] Loaded profile config "newest-cni-086099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:01.790111  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:01.807644  378695 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:01.807895  378695 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1115 10:36:01.807913  378695 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1115 10:35:59.487887  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:36:01.488245  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	I1115 10:36:01.988676  367608 node_ready.go:49] node "default-k8s-diff-port-026691" is "Ready"
	I1115 10:36:01.988712  367608 node_ready.go:38] duration metric: took 40.004362414s for node "default-k8s-diff-port-026691" to be "Ready" ...
	I1115 10:36:01.988728  367608 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:36:01.988785  367608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:36:02.002727  367608 api_server.go:72] duration metric: took 41.048135621s to wait for apiserver process to appear ...
	I1115 10:36:02.002761  367608 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:36:02.002786  367608 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1115 10:36:02.007061  367608 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1115 10:36:02.008035  367608 api_server.go:141] control plane version: v1.34.1
	I1115 10:36:02.008064  367608 api_server.go:131] duration metric: took 5.294787ms to wait for apiserver health ...
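
The readiness probe above hits https://192.168.85.2:8444/healthz and expects a 200 with body "ok" before moving on to pod checks. A bare-bones version of that probe in Go (the address is the one from the log; certificate verification is skipped here purely for illustration, whereas minikube trusts the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Sketch only: the real check verifies the apiserver cert against the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.85.2:8444/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok
	}
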
	I1115 10:36:02.008076  367608 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:36:02.011683  367608 system_pods.go:59] 8 kube-system pods found
	I1115 10:36:02.011713  367608 system_pods.go:61] "coredns-66bc5c9577-5q2j4" [e6c4ca54-e0fe-45ee-88a6-33bdccbb876c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:02.011719  367608 system_pods.go:61] "etcd-default-k8s-diff-port-026691" [f8228efa-91a3-41fb-9952-3b4063ddf162] Running
	I1115 10:36:02.011725  367608 system_pods.go:61] "kindnet-hjdrk" [9e1f7579-f5a2-44cd-b77f-71219cd8827d] Running
	I1115 10:36:02.011729  367608 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-026691" [9da86b3b-bfcd-4c40-ac86-ee9d9faeb9b0] Running
	I1115 10:36:02.011732  367608 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-026691" [23d9be87-1de1-4c38-b65e-b084cf0fed25] Running
	I1115 10:36:02.011737  367608 system_pods.go:61] "kube-proxy-c5bw5" [ee48d34b-ae60-4a03-a7bd-df76e089eebb] Running
	I1115 10:36:02.011741  367608 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-026691" [52b3711f-015c-4af6-8672-58642cbc0c16] Running
	I1115 10:36:02.011747  367608 system_pods.go:61] "storage-provisioner" [7dedf7a9-415d-4260-b225-7ca171744768] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:36:02.011757  367608 system_pods.go:74] duration metric: took 3.675183ms to wait for pod list to return data ...
	I1115 10:36:02.011767  367608 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:36:02.014095  367608 default_sa.go:45] found service account: "default"
	I1115 10:36:02.014113  367608 default_sa.go:55] duration metric: took 2.338136ms for default service account to be created ...
	I1115 10:36:02.014121  367608 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:36:02.016619  367608 system_pods.go:86] 8 kube-system pods found
	I1115 10:36:02.016644  367608 system_pods.go:89] "coredns-66bc5c9577-5q2j4" [e6c4ca54-e0fe-45ee-88a6-33bdccbb876c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:02.016650  367608 system_pods.go:89] "etcd-default-k8s-diff-port-026691" [f8228efa-91a3-41fb-9952-3b4063ddf162] Running
	I1115 10:36:02.016657  367608 system_pods.go:89] "kindnet-hjdrk" [9e1f7579-f5a2-44cd-b77f-71219cd8827d] Running
	I1115 10:36:02.016663  367608 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-026691" [9da86b3b-bfcd-4c40-ac86-ee9d9faeb9b0] Running
	I1115 10:36:02.016668  367608 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-026691" [23d9be87-1de1-4c38-b65e-b084cf0fed25] Running
	I1115 10:36:02.016676  367608 system_pods.go:89] "kube-proxy-c5bw5" [ee48d34b-ae60-4a03-a7bd-df76e089eebb] Running
	I1115 10:36:02.016681  367608 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-026691" [52b3711f-015c-4af6-8672-58642cbc0c16] Running
	I1115 10:36:02.016692  367608 system_pods.go:89] "storage-provisioner" [7dedf7a9-415d-4260-b225-7ca171744768] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:36:02.016714  367608 retry.go:31] will retry after 218.810216ms: missing components: kube-dns
	I1115 10:36:02.239606  367608 system_pods.go:86] 8 kube-system pods found
	I1115 10:36:02.239636  367608 system_pods.go:89] "coredns-66bc5c9577-5q2j4" [e6c4ca54-e0fe-45ee-88a6-33bdccbb876c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:02.239642  367608 system_pods.go:89] "etcd-default-k8s-diff-port-026691" [f8228efa-91a3-41fb-9952-3b4063ddf162] Running
	I1115 10:36:02.239648  367608 system_pods.go:89] "kindnet-hjdrk" [9e1f7579-f5a2-44cd-b77f-71219cd8827d] Running
	I1115 10:36:02.239654  367608 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-026691" [9da86b3b-bfcd-4c40-ac86-ee9d9faeb9b0] Running
	I1115 10:36:02.239657  367608 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-026691" [23d9be87-1de1-4c38-b65e-b084cf0fed25] Running
	I1115 10:36:02.239661  367608 system_pods.go:89] "kube-proxy-c5bw5" [ee48d34b-ae60-4a03-a7bd-df76e089eebb] Running
	I1115 10:36:02.239665  367608 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-026691" [52b3711f-015c-4af6-8672-58642cbc0c16] Running
	I1115 10:36:02.239671  367608 system_pods.go:89] "storage-provisioner" [7dedf7a9-415d-4260-b225-7ca171744768] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:36:02.239689  367608 retry.go:31] will retry after 377.391978ms: missing components: kube-dns
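	The retry loop above is waiting for CoreDNS to report Ready. A hedged way to watch the same condition by hand, using the k8s-app=kube-dns label implied by the pod name in the log; the timeout value is only illustrative:

	    # list the CoreDNS pods the wait loop is checking
	    kubectl -n kube-system get pods -l k8s-app=kube-dns
	    # block until they report Ready
	    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s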
	I1115 10:35:59.653179  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 10:35:59.653211  377744 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 10:35:59.670277  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 10:35:59.670303  377744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 10:35:59.757741  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 10:35:59.757796  377744 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 10:35:59.771666  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 10:35:59.771696  377744 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 10:35:59.844282  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 10:35:59.844312  377744 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 10:35:59.859695  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 10:35:59.859723  377744 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 10:35:59.873202  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:35:59.873227  377744 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 10:35:59.887124  377744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:36:03.675772  377744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.21470192s)
	I1115 10:36:03.675861  377744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.214437385s)
	I1115 10:36:03.675941  377744 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.125882332s)
	I1115 10:36:03.676037  377744 node_ready.go:35] waiting up to 6m0s for node "embed-certs-719574" to be "Ready" ...
	I1115 10:36:03.676084  377744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.788916637s)
	I1115 10:36:03.677758  377744 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-719574 addons enable metrics-server
	
	I1115 10:36:03.686848  377744 node_ready.go:49] node "embed-certs-719574" is "Ready"
	I1115 10:36:03.686872  377744 node_ready.go:38] duration metric: took 10.779527ms for node "embed-certs-719574" to be "Ready" ...
	I1115 10:36:03.686888  377744 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:36:03.686937  377744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:36:03.688770  377744 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1115 10:36:02.108071  378695 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:36:02.108099  378695 machine.go:97] duration metric: took 4.215497724s to provisionDockerMachine
	I1115 10:36:02.108110  378695 client.go:176] duration metric: took 10.030938427s to LocalClient.Create
	I1115 10:36:02.108130  378695 start.go:167] duration metric: took 10.030994703s to libmachine.API.Create "newest-cni-086099"
	I1115 10:36:02.108137  378695 start.go:293] postStartSetup for "newest-cni-086099" (driver="docker")
	I1115 10:36:02.108146  378695 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:36:02.108214  378695 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:36:02.108252  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:02.126898  378695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:02.234226  378695 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:36:02.237991  378695 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:36:02.238025  378695 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:36:02.238037  378695 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/addons for local assets ...
	I1115 10:36:02.238104  378695 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/files for local assets ...
	I1115 10:36:02.238204  378695 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem -> 589622.pem in /etc/ssl/certs
	I1115 10:36:02.238321  378695 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:36:02.249461  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:02.279024  378695 start.go:296] duration metric: took 170.869278ms for postStartSetup
	I1115 10:36:02.279408  378695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-086099
	I1115 10:36:02.299580  378695 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/config.json ...
	I1115 10:36:02.299869  378695 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:36:02.299927  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:02.318249  378695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:02.419697  378695 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:36:02.424780  378695 start.go:128] duration metric: took 10.349732709s to createHost
	I1115 10:36:02.424816  378695 start.go:83] releasing machines lock for "newest-cni-086099", held for 10.349888861s
	I1115 10:36:02.424894  378695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-086099
	I1115 10:36:02.442707  378695 ssh_runner.go:195] Run: cat /version.json
	I1115 10:36:02.442769  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:02.442774  378695 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:36:02.442838  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:02.475405  378695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:02.476482  378695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:02.627684  378695 ssh_runner.go:195] Run: systemctl --version
	I1115 10:36:02.635318  378695 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:36:02.690380  378695 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:36:02.695343  378695 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:36:02.695404  378695 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:36:02.723025  378695 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1115 10:36:02.723047  378695 start.go:496] detecting cgroup driver to use...
	I1115 10:36:02.723077  378695 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:36:02.723116  378695 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:36:02.740027  378695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:36:02.757082  378695 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:36:02.757147  378695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:36:02.780790  378695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:36:02.800005  378695 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:36:02.903918  378695 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:36:03.008676  378695 docker.go:234] disabling docker service ...
	I1115 10:36:03.008735  378695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:36:03.029417  378695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:36:03.042351  378695 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:36:03.141887  378695 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:36:03.242543  378695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:36:03.261558  378695 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:36:03.281222  378695 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:36:03.281289  378695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:03.292850  378695 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:36:03.292913  378695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:03.302308  378695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:03.312080  378695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:03.321520  378695 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:36:03.330371  378695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:03.339342  378695 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:03.358403  378695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:03.370875  378695 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:36:03.382720  378695 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:36:03.392373  378695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:03.490238  378695 ssh_runner.go:195] Run: sudo systemctl restart crio
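	The sed commands above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, move conmon into the pod cgroup and open unprivileged ports; the restart applies them. A minimal sketch for spot-checking the effective settings afterwards; the grep pattern is only illustrative, while `crio config` is the same subcommand the test invokes further down:

	    # print the merged CRI-O configuration and pick out the keys edited above
	    sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start'
	    # confirm the service came back after the restart
	    sudo systemctl is-active crio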
	I1115 10:36:03.612676  378695 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:36:03.612751  378695 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:36:03.616844  378695 start.go:564] Will wait 60s for crictl version
	I1115 10:36:03.616906  378695 ssh_runner.go:195] Run: which crictl
	I1115 10:36:03.620519  378695 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:36:03.647994  378695 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
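	The version report above succeeds because /etc/crictl.yaml, written a few lines earlier, points crictl at CRI-O's socket. A short sketch of the same sanity check, assuming shell access to the node:

	    # endpoint crictl resolves from the generated config
	    cat /etc/crictl.yaml
	    # runtime status over that socket; `crictl version` (used above) exercises the same connection
	    sudo crictl info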
	I1115 10:36:03.648098  378695 ssh_runner.go:195] Run: crio --version
	I1115 10:36:03.681466  378695 ssh_runner.go:195] Run: crio --version
	I1115 10:36:03.715909  378695 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:36:03.717677  378695 cli_runner.go:164] Run: docker network inspect newest-cni-086099 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:36:03.737236  378695 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1115 10:36:03.741562  378695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
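	The one-liner above rewrites /etc/hosts so host.minikube.internal resolves to the network gateway. A quick check of the result, assuming shell access to the node:

	    # entry written by the command above
	    grep 'host.minikube.internal' /etc/hosts
	    getent hosts host.minikube.internal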
	I1115 10:36:03.754243  378695 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1115 10:36:02.621370  367608 system_pods.go:86] 8 kube-system pods found
	I1115 10:36:02.621401  367608 system_pods.go:89] "coredns-66bc5c9577-5q2j4" [e6c4ca54-e0fe-45ee-88a6-33bdccbb876c] Running
	I1115 10:36:02.621407  367608 system_pods.go:89] "etcd-default-k8s-diff-port-026691" [f8228efa-91a3-41fb-9952-3b4063ddf162] Running
	I1115 10:36:02.621412  367608 system_pods.go:89] "kindnet-hjdrk" [9e1f7579-f5a2-44cd-b77f-71219cd8827d] Running
	I1115 10:36:02.621416  367608 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-026691" [9da86b3b-bfcd-4c40-ac86-ee9d9faeb9b0] Running
	I1115 10:36:02.621421  367608 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-026691" [23d9be87-1de1-4c38-b65e-b084cf0fed25] Running
	I1115 10:36:02.621424  367608 system_pods.go:89] "kube-proxy-c5bw5" [ee48d34b-ae60-4a03-a7bd-df76e089eebb] Running
	I1115 10:36:02.621428  367608 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-026691" [52b3711f-015c-4af6-8672-58642cbc0c16] Running
	I1115 10:36:02.621431  367608 system_pods.go:89] "storage-provisioner" [7dedf7a9-415d-4260-b225-7ca171744768] Running
	I1115 10:36:02.621439  367608 system_pods.go:126] duration metric: took 607.311685ms to wait for k8s-apps to be running ...
	I1115 10:36:02.621445  367608 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:36:02.621494  367608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:36:02.636245  367608 system_svc.go:56] duration metric: took 14.790396ms WaitForService to wait for kubelet
	I1115 10:36:02.636277  367608 kubeadm.go:587] duration metric: took 41.681692299s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:36:02.636317  367608 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:36:02.639743  367608 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:36:02.639770  367608 node_conditions.go:123] node cpu capacity is 8
	I1115 10:36:02.639786  367608 node_conditions.go:105] duration metric: took 3.46192ms to run NodePressure ...
	I1115 10:36:02.639802  367608 start.go:242] waiting for startup goroutines ...
	I1115 10:36:02.639815  367608 start.go:247] waiting for cluster config update ...
	I1115 10:36:02.639834  367608 start.go:256] writing updated cluster config ...
	I1115 10:36:02.640167  367608 ssh_runner.go:195] Run: rm -f paused
	I1115 10:36:02.644506  367608 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:36:02.649994  367608 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5q2j4" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:02.656679  367608 pod_ready.go:94] pod "coredns-66bc5c9577-5q2j4" is "Ready"
	I1115 10:36:02.656844  367608 pod_ready.go:86] duration metric: took 6.756741ms for pod "coredns-66bc5c9577-5q2j4" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:02.659798  367608 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:02.665415  367608 pod_ready.go:94] pod "etcd-default-k8s-diff-port-026691" is "Ready"
	I1115 10:36:02.665516  367608 pod_ready.go:86] duration metric: took 5.656754ms for pod "etcd-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:02.669115  367608 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:02.675621  367608 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-026691" is "Ready"
	I1115 10:36:02.675649  367608 pod_ready.go:86] duration metric: took 6.472611ms for pod "kube-apiserver-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:02.678236  367608 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:03.050408  367608 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-026691" is "Ready"
	I1115 10:36:03.050447  367608 pod_ready.go:86] duration metric: took 372.139168ms for pod "kube-controller-manager-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:03.250079  367608 pod_ready.go:83] waiting for pod "kube-proxy-c5bw5" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:03.649856  367608 pod_ready.go:94] pod "kube-proxy-c5bw5" is "Ready"
	I1115 10:36:03.649889  367608 pod_ready.go:86] duration metric: took 399.777083ms for pod "kube-proxy-c5bw5" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:03.850318  367608 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:04.249888  367608 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-026691" is "Ready"
	I1115 10:36:04.249914  367608 pod_ready.go:86] duration metric: took 399.564892ms for pod "kube-scheduler-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:04.249926  367608 pod_ready.go:40] duration metric: took 1.605379763s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:36:04.304218  367608 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:36:04.306183  367608 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-026691" cluster and "default" namespace by default
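	With the profile reported done, the kubeconfig context default-k8s-diff-port-026691 is active. A hedged sketch of follow-up checks a reader could run at this point; none of them appear in the test itself:

	    # confirm kubectl points at the new cluster and the node is Ready
	    kubectl config current-context
	    kubectl get nodes -o wide
	    kubectl -n kube-system get pods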
	I1115 10:36:03.689851  377744 addons.go:515] duration metric: took 4.61797682s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1115 10:36:03.700992  377744 api_server.go:72] duration metric: took 4.62919911s to wait for apiserver process to appear ...
	I1115 10:36:03.701014  377744 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:36:03.701034  377744 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1115 10:36:03.705295  377744 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1115 10:36:03.706367  377744 api_server.go:141] control plane version: v1.34.1
	I1115 10:36:03.706398  377744 api_server.go:131] duration metric: took 5.374158ms to wait for apiserver health ...
	I1115 10:36:03.706409  377744 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:36:03.710047  377744 system_pods.go:59] 8 kube-system pods found
	I1115 10:36:03.710083  377744 system_pods.go:61] "coredns-66bc5c9577-fjzk5" [d4d185bc-88ec-4edb-b250-6a59ee426bf5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:03.710095  377744 system_pods.go:61] "etcd-embed-certs-719574" [293cb467-4f15-417b-944e-457c3ac7e56f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:36:03.710106  377744 system_pods.go:61] "kindnet-ql2r4" [224e2951-6c97-449d-8ff8-f72aa6d36d60] Running
	I1115 10:36:03.710122  377744 system_pods.go:61] "kube-apiserver-embed-certs-719574" [43a83385-040c-470f-8242-5066617ac8d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:36:03.710135  377744 system_pods.go:61] "kube-controller-manager-embed-certs-719574" [78f16cd1-774e-4556-9db6-bca54bc8214b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:36:03.710141  377744 system_pods.go:61] "kube-proxy-kmc8c" [3534c76c-4b99-4b84-ba00-21d0d49e770f] Running
	I1115 10:36:03.710147  377744 system_pods.go:61] "kube-scheduler-embed-certs-719574" [9b8d2d78-442d-48cc-88be-29b9eb7017e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:36:03.710158  377744 system_pods.go:61] "storage-provisioner" [39c3baf2-24de-475e-aeef-a10825991ca3] Running
	I1115 10:36:03.710165  377744 system_pods.go:74] duration metric: took 3.749108ms to wait for pod list to return data ...
	I1115 10:36:03.710174  377744 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:36:03.712493  377744 default_sa.go:45] found service account: "default"
	I1115 10:36:03.712513  377744 default_sa.go:55] duration metric: took 2.331314ms for default service account to be created ...
	I1115 10:36:03.712522  377744 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:36:03.715355  377744 system_pods.go:86] 8 kube-system pods found
	I1115 10:36:03.715378  377744 system_pods.go:89] "coredns-66bc5c9577-fjzk5" [d4d185bc-88ec-4edb-b250-6a59ee426bf5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:03.715386  377744 system_pods.go:89] "etcd-embed-certs-719574" [293cb467-4f15-417b-944e-457c3ac7e56f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:36:03.715391  377744 system_pods.go:89] "kindnet-ql2r4" [224e2951-6c97-449d-8ff8-f72aa6d36d60] Running
	I1115 10:36:03.715398  377744 system_pods.go:89] "kube-apiserver-embed-certs-719574" [43a83385-040c-470f-8242-5066617ac8d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:36:03.715405  377744 system_pods.go:89] "kube-controller-manager-embed-certs-719574" [78f16cd1-774e-4556-9db6-bca54bc8214b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:36:03.715412  377744 system_pods.go:89] "kube-proxy-kmc8c" [3534c76c-4b99-4b84-ba00-21d0d49e770f] Running
	I1115 10:36:03.715417  377744 system_pods.go:89] "kube-scheduler-embed-certs-719574" [9b8d2d78-442d-48cc-88be-29b9eb7017e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:36:03.715427  377744 system_pods.go:89] "storage-provisioner" [39c3baf2-24de-475e-aeef-a10825991ca3] Running
	I1115 10:36:03.715435  377744 system_pods.go:126] duration metric: took 2.908753ms to wait for k8s-apps to be running ...
	I1115 10:36:03.715443  377744 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:36:03.715482  377744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:36:03.729079  377744 system_svc.go:56] duration metric: took 13.624714ms WaitForService to wait for kubelet
	I1115 10:36:03.729108  377744 kubeadm.go:587] duration metric: took 4.657317817s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:36:03.729130  377744 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:36:03.732380  377744 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:36:03.732409  377744 node_conditions.go:123] node cpu capacity is 8
	I1115 10:36:03.732424  377744 node_conditions.go:105] duration metric: took 3.288836ms to run NodePressure ...
	I1115 10:36:03.732439  377744 start.go:242] waiting for startup goroutines ...
	I1115 10:36:03.732448  377744 start.go:247] waiting for cluster config update ...
	I1115 10:36:03.732463  377744 start.go:256] writing updated cluster config ...
	I1115 10:36:03.732754  377744 ssh_runner.go:195] Run: rm -f paused
	I1115 10:36:03.737164  377744 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:36:03.740586  377744 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fjzk5" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:03.755299  378695 kubeadm.go:884] updating cluster {Name:newest-cni-086099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disa
bleMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:36:03.755432  378695 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:36:03.755482  378695 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:03.794722  378695 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:03.794749  378695 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:36:03.794805  378695 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:03.826109  378695 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:03.826142  378695 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:36:03.826153  378695 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1115 10:36:03.826264  378695 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-086099 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
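	The [Unit]/[Service]/[Install] block above is the kubelet drop-in that minikube writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. Once that file is in place and daemon-reload has run, the override can be inspected with standard systemd tooling; a minimal sketch, assuming shell access to the node:

	    # show the kubelet unit together with the 10-kubeadm.conf drop-in
	    systemctl cat kubelet
	    # confirm the ExecStart override took effect
	    systemctl show kubelet -p ExecStart --no-pager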
	I1115 10:36:03.826354  378695 ssh_runner.go:195] Run: crio config
	I1115 10:36:03.879671  378695 cni.go:84] Creating CNI manager for ""
	I1115 10:36:03.879701  378695 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:36:03.879717  378695 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1115 10:36:03.879739  378695 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-086099 NodeName:newest-cni-086099 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:36:03.879883  378695 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-086099"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:36:03.879988  378695 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:36:03.888992  378695 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:36:03.889052  378695 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:36:03.897294  378695 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1115 10:36:03.911151  378695 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:36:03.930297  378695 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
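	The kubeadm configuration shown above lands in /var/tmp/minikube/kubeadm.yaml.new here and is copied to kubeadm.yaml before `kubeadm init` runs (see StartCluster below). A hedged sketch of validating it offline; recent kubeadm releases ship a `kubeadm config validate` subcommand, so treat its availability and exact flags as an assumption rather than something the test does:

	    # static validation of the generated config before kubeadm init consumes it
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new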
	I1115 10:36:03.945072  378695 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:36:03.948706  378695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:03.959243  378695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:04.058938  378695 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:36:04.093857  378695 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099 for IP: 192.168.103.2
	I1115 10:36:04.093888  378695 certs.go:195] generating shared ca certs ...
	I1115 10:36:04.093909  378695 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:04.094076  378695 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 10:36:04.094148  378695 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 10:36:04.094163  378695 certs.go:257] generating profile certs ...
	I1115 10:36:04.094230  378695 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/client.key
	I1115 10:36:04.094258  378695 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/client.crt with IP's: []
	I1115 10:36:04.385453  378695 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/client.crt ...
	I1115 10:36:04.385478  378695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/client.crt: {Name:mk40f6a053043aca087e720d3a4da44f4215e456 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:04.385623  378695 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/client.key ...
	I1115 10:36:04.385633  378695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/client.key: {Name:mk7ba7a9aed87498b12d0ea82f1fd16a2802adbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:04.385729  378695 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key.d719cdad
	I1115 10:36:04.385749  378695 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.crt.d719cdad with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1115 10:36:04.782829  378695 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.crt.d719cdad ...
	I1115 10:36:04.782863  378695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.crt.d719cdad: {Name:mkcdec4fb6d5949c6190ac10a0f9caeb369ef1df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:04.783103  378695 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key.d719cdad ...
	I1115 10:36:04.783129  378695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key.d719cdad: {Name:mk74203e2c301a3a488fc95324a401039fa8106d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:04.783253  378695 certs.go:382] copying /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.crt.d719cdad -> /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.crt
	I1115 10:36:04.783373  378695 certs.go:386] copying /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key.d719cdad -> /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key
	I1115 10:36:04.783463  378695 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.key
	I1115 10:36:04.783486  378695 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.crt with IP's: []
	I1115 10:36:04.900301  378695 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.crt ...
	I1115 10:36:04.900329  378695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.crt: {Name:mk0d5b4842614d84db6a4d32b9e40b0ee2961026 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:04.900527  378695 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.key ...
	I1115 10:36:04.900547  378695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.key: {Name:mkc0cf01fd3204cf2eb33c45d49bdb1a3af7d389 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:04.900769  378695 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:36:04.900806  378695 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:36:04.900817  378695 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:36:04.900837  378695 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:36:04.900863  378695 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:36:04.900884  378695 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:36:04.900931  378695 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:04.901498  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:36:04.920490  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:36:04.938524  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:36:04.956167  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:36:04.974935  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 10:36:04.995270  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:36:05.016110  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:36:05.034440  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
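	The profile certificates generated above are now in /var/lib/minikube/certs. A hedged sketch for confirming the apiserver certificate chains to the shared minikube CA and carries the SANs listed in the crypto.go line above; paths are taken from the scp targets:

	    # check the apiserver cert against the CA it was signed with
	    sudo openssl verify -CAfile /var/lib/minikube/certs/ca.crt /var/lib/minikube/certs/apiserver.crt
	    # list the SANs baked into it
	    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'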
	I1115 10:36:05.051948  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:36:05.071136  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:36:05.100067  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:36:05.120144  378695 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:36:05.133751  378695 ssh_runner.go:195] Run: openssl version
	I1115 10:36:05.140442  378695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:36:05.150520  378695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:05.155339  378695 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:05.155411  378695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:05.205520  378695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:36:05.214306  378695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:36:05.222589  378695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:36:05.226661  378695 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:36:05.226723  378695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:36:05.269094  378695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:36:05.282750  378695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:36:05.291785  378695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:36:05.295742  378695 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:36:05.295801  378695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:36:05.341059  378695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
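	The openssl/ln pairs above implement OpenSSL's hash-symlink layout: each CA in /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash, which is where names like b5213941.0 and 3ec20f2e.0 come from. A minimal sketch of the same scheme for the minikube CA, using the paths from the log:

	    # subject hash that determines the symlink name
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    # <hash>.0 symlink so TLS clients scanning /etc/ssl/certs can find the CA
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem).0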
	I1115 10:36:05.352931  378695 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:36:05.357729  378695 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 10:36:05.357794  378695 kubeadm.go:401] StartCluster: {Name:newest-cni-086099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disable
Metrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:05.357898  378695 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:36:05.358038  378695 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:36:05.389342  378695 cri.go:89] found id: ""
	I1115 10:36:05.389409  378695 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:36:05.399176  378695 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 10:36:05.407568  378695 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 10:36:05.407619  378695 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 10:36:05.415732  378695 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 10:36:05.415750  378695 kubeadm.go:158] found existing configuration files:
	
	I1115 10:36:05.415789  378695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 10:36:05.423933  378695 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 10:36:05.424003  378695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 10:36:05.431425  378695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 10:36:05.439333  378695 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 10:36:05.439396  378695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 10:36:05.446777  378695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 10:36:05.454437  378695 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 10:36:05.454481  378695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 10:36:05.461644  378695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 10:36:05.468875  378695 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 10:36:05.468937  378695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 10:36:05.476821  378695 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 10:36:05.516431  378695 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 10:36:05.516536  378695 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 10:36:05.536153  378695 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 10:36:05.536251  378695 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1115 10:36:05.536322  378695 kubeadm.go:319] OS: Linux
	I1115 10:36:05.536373  378695 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 10:36:05.536430  378695 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 10:36:05.536519  378695 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 10:36:05.536598  378695 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 10:36:05.536682  378695 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 10:36:05.536769  378695 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 10:36:05.536832  378695 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 10:36:05.536877  378695 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 10:36:05.536920  378695 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 10:36:05.598690  378695 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 10:36:05.598871  378695 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 10:36:05.599041  378695 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 10:36:05.606076  378695 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 10:36:05.608588  378695 out.go:252]   - Generating certificates and keys ...
	I1115 10:36:05.608685  378695 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 10:36:05.608773  378695 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 10:36:06.648403  378695 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 10:36:06.817549  378695 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	W1115 10:36:05.746906  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	W1115 10:36:07.750717  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	I1115 10:36:07.421389  378695 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 10:36:07.530169  378695 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 10:36:07.661595  378695 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 10:36:07.661935  378695 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-086099] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1115 10:36:07.815844  378695 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 10:36:07.815984  378695 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-086099] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1115 10:36:08.340480  378695 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 10:36:08.581150  378695 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 10:36:08.685187  378695 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 10:36:08.685316  378695 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 10:36:09.142759  378695 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 10:36:09.525800  378695 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 10:36:10.064453  378695 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 10:36:10.611944  378695 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 10:36:10.725282  378695 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 10:36:10.726089  378695 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 10:36:10.732368  378695 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 10:36:10.733696  378695 out.go:252]   - Booting up control plane ...
	I1115 10:36:10.733914  378695 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 10:36:10.734036  378695 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 10:36:10.734647  378695 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 10:36:10.751182  378695 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 10:36:10.751353  378695 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 10:36:10.758855  378695 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 10:36:10.759149  378695 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 10:36:10.759248  378695 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 10:36:10.861925  378695 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 10:36:10.862096  378695 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 10:36:11.863287  378695 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00142158s
	I1115 10:36:11.866873  378695 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 10:36:11.867055  378695 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1115 10:36:11.867227  378695 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 10:36:11.867334  378695 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1115 10:36:10.247511  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	W1115 10:36:12.252752  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	W1115 10:36:14.260890  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	I1115 10:36:15.556581  378695 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.689495085s
	I1115 10:36:15.951246  378695 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.084319454s
	
	
	==> CRI-O <==
	Nov 15 10:36:01 default-k8s-diff-port-026691 crio[896]: time="2025-11-15T10:36:01.915740212Z" level=info msg="Created container 5c083ec1dee7e4b21d863a284a47c54f8f6c2ec47874856ab5050d573e56e227: kube-system/coredns-66bc5c9577-5q2j4/coredns" id=8880b451-2dd1-4a98-8c01-e0076cea4bea name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:36:01 default-k8s-diff-port-026691 crio[896]: time="2025-11-15T10:36:01.916326453Z" level=info msg="Starting container: 5c083ec1dee7e4b21d863a284a47c54f8f6c2ec47874856ab5050d573e56e227" id=d4a89055-40fc-4734-9547-efe0f127dc8e name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:36:01 default-k8s-diff-port-026691 crio[896]: time="2025-11-15T10:36:01.920318281Z" level=info msg="Started container" PID=1988 containerID=5c083ec1dee7e4b21d863a284a47c54f8f6c2ec47874856ab5050d573e56e227 description=kube-system/coredns-66bc5c9577-5q2j4/coredns id=d4a89055-40fc-4734-9547-efe0f127dc8e name=/runtime.v1.RuntimeService/StartContainer sandboxID=edcdfac6e7ae8df90e08aea5e0fe925ddc60f8b52c7c0f35897b341e1d655ffe
	Nov 15 10:36:04 default-k8s-diff-port-026691 crio[896]: time="2025-11-15T10:36:04.809137927Z" level=info msg="Running pod sandbox: default/busybox/POD" id=c970d34d-26c1-4fdb-90c9-d5ae1647cd55 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:36:04 default-k8s-diff-port-026691 crio[896]: time="2025-11-15T10:36:04.809241311Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:04 default-k8s-diff-port-026691 crio[896]: time="2025-11-15T10:36:04.817116833Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:849926d34b41ada37431cbadf53528116129ed3d262c77c57fc26aa31edb6294 UID:8e2e3c26-b883-4c84-b07b-e107e5b36bbc NetNS:/var/run/netns/44d6e870-1b18-4be0-a596-3cd7e62c809f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b030}] Aliases:map[]}"
	Nov 15 10:36:04 default-k8s-diff-port-026691 crio[896]: time="2025-11-15T10:36:04.817158147Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 15 10:36:04 default-k8s-diff-port-026691 crio[896]: time="2025-11-15T10:36:04.828918442Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:849926d34b41ada37431cbadf53528116129ed3d262c77c57fc26aa31edb6294 UID:8e2e3c26-b883-4c84-b07b-e107e5b36bbc NetNS:/var/run/netns/44d6e870-1b18-4be0-a596-3cd7e62c809f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b030}] Aliases:map[]}"
	Nov 15 10:36:04 default-k8s-diff-port-026691 crio[896]: time="2025-11-15T10:36:04.829133934Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 15 10:36:04 default-k8s-diff-port-026691 crio[896]: time="2025-11-15T10:36:04.832094874Z" level=info msg="Ran pod sandbox 849926d34b41ada37431cbadf53528116129ed3d262c77c57fc26aa31edb6294 with infra container: default/busybox/POD" id=c970d34d-26c1-4fdb-90c9-d5ae1647cd55 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:36:04 default-k8s-diff-port-026691 crio[896]: time="2025-11-15T10:36:04.833324166Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6e963faf-c590-4ff6-b6b0-5229c2c96af9 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:36:04 default-k8s-diff-port-026691 crio[896]: time="2025-11-15T10:36:04.833468373Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=6e963faf-c590-4ff6-b6b0-5229c2c96af9 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:36:04 default-k8s-diff-port-026691 crio[896]: time="2025-11-15T10:36:04.833519593Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=6e963faf-c590-4ff6-b6b0-5229c2c96af9 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:36:04 default-k8s-diff-port-026691 crio[896]: time="2025-11-15T10:36:04.834371258Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=26a1ee76-4513-4b19-8fae-7131ab7b5d5a name=/runtime.v1.ImageService/PullImage
	Nov 15 10:36:04 default-k8s-diff-port-026691 crio[896]: time="2025-11-15T10:36:04.836232948Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 15 10:36:09 default-k8s-diff-port-026691 crio[896]: time="2025-11-15T10:36:09.471162058Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=26a1ee76-4513-4b19-8fae-7131ab7b5d5a name=/runtime.v1.ImageService/PullImage
	Nov 15 10:36:09 default-k8s-diff-port-026691 crio[896]: time="2025-11-15T10:36:09.472165637Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8454ee31-e779-43b5-bb76-9f88dfad309b name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:36:09 default-k8s-diff-port-026691 crio[896]: time="2025-11-15T10:36:09.473971005Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a4fde740-3c2e-49a1-acd7-0e051ef09e63 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:36:09 default-k8s-diff-port-026691 crio[896]: time="2025-11-15T10:36:09.478864905Z" level=info msg="Creating container: default/busybox/busybox" id=85bfe990-992b-4433-bf83-af7893ca9c16 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:36:09 default-k8s-diff-port-026691 crio[896]: time="2025-11-15T10:36:09.479036914Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:09 default-k8s-diff-port-026691 crio[896]: time="2025-11-15T10:36:09.486594947Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:09 default-k8s-diff-port-026691 crio[896]: time="2025-11-15T10:36:09.48754841Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:09 default-k8s-diff-port-026691 crio[896]: time="2025-11-15T10:36:09.514266529Z" level=info msg="Created container 635b22406d85678c7eadc78fa846563577baeb5804eeb35df1ede006abcb8f58: default/busybox/busybox" id=85bfe990-992b-4433-bf83-af7893ca9c16 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:36:09 default-k8s-diff-port-026691 crio[896]: time="2025-11-15T10:36:09.515151249Z" level=info msg="Starting container: 635b22406d85678c7eadc78fa846563577baeb5804eeb35df1ede006abcb8f58" id=db4c3958-df74-42b2-b213-928e7b64f1df name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:36:09 default-k8s-diff-port-026691 crio[896]: time="2025-11-15T10:36:09.518602678Z" level=info msg="Started container" PID=2061 containerID=635b22406d85678c7eadc78fa846563577baeb5804eeb35df1ede006abcb8f58 description=default/busybox/busybox id=db4c3958-df74-42b2-b213-928e7b64f1df name=/runtime.v1.RuntimeService/StartContainer sandboxID=849926d34b41ada37431cbadf53528116129ed3d262c77c57fc26aa31edb6294
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	635b22406d856       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago        Running             busybox                   0                   849926d34b41a       busybox                                                default
	5c083ec1dee7e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      16 seconds ago       Running             coredns                   0                   edcdfac6e7ae8       coredns-66bc5c9577-5q2j4                               kube-system
	d3c681e2b21c3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      16 seconds ago       Running             storage-provisioner       0                   f61b3f9defa4d       storage-provisioner                                    kube-system
	551cb7671967a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      57 seconds ago       Running             kube-proxy                0                   07c701352514c       kube-proxy-c5bw5                                       kube-system
	0b98322c65a7f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      57 seconds ago       Running             kindnet-cni               0                   89e181a537fdd       kindnet-hjdrk                                          kube-system
	a0e319e0a134d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      About a minute ago   Running             kube-controller-manager   0                   065f4755b5165       kube-controller-manager-default-k8s-diff-port-026691   kube-system
	f777104202c39       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      About a minute ago   Running             kube-apiserver            0                   9136d629f7374       kube-apiserver-default-k8s-diff-port-026691            kube-system
	f436d1c59b9aa       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      About a minute ago   Running             etcd                      0                   d6aa6e5560467       etcd-default-k8s-diff-port-026691                      kube-system
	54e728cef9b5e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      About a minute ago   Running             kube-scheduler            0                   489082e67dc1f       kube-scheduler-default-k8s-diff-port-026691            kube-system
	
	
	==> coredns [5c083ec1dee7e4b21d863a284a47c54f8f6c2ec47874856ab5050d573e56e227] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39780 - 4862 "HINFO IN 6131144918457599503.4582092392421485323. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.017542483s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-026691
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-026691
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=default-k8s-diff-port-026691
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_35_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:35:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-026691
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:36:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:36:16 +0000   Sat, 15 Nov 2025 10:35:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:36:16 +0000   Sat, 15 Nov 2025 10:35:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:36:16 +0000   Sat, 15 Nov 2025 10:35:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:36:16 +0000   Sat, 15 Nov 2025 10:36:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-026691
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                cb07002a-423d-4a10-9a8e-bf05fe259209
	  Boot ID:                    6a33f504-3a8c-4d49-8fc5-301cf5275efc
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 coredns-66bc5c9577-5q2j4                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     58s
	  kube-system                 etcd-default-k8s-diff-port-026691                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         63s
	  kube-system                 kindnet-hjdrk                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      58s
	  kube-system                 kube-apiserver-default-k8s-diff-port-026691             250m (3%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-026691    200m (2%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-proxy-c5bw5                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-scheduler-default-k8s-diff-port-026691             100m (1%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 56s                kube-proxy       
	  Normal   Starting                 70s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 70s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  69s (x8 over 70s)  kubelet          Node default-k8s-diff-port-026691 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    69s (x8 over 70s)  kubelet          Node default-k8s-diff-port-026691 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     69s (x8 over 70s)  kubelet          Node default-k8s-diff-port-026691 status is now: NodeHasSufficientPID
	  Normal   Starting                 63s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s                kubelet          Node default-k8s-diff-port-026691 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s                kubelet          Node default-k8s-diff-port-026691 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s                kubelet          Node default-k8s-diff-port-026691 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           59s                node-controller  Node default-k8s-diff-port-026691 event: Registered Node default-k8s-diff-port-026691 in Controller
	  Normal   NodeReady                17s                kubelet          Node default-k8s-diff-port-026691 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 7f 53 f9 15 93 08 06
	[  +0.017821] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c6 66 25 e9 45 28 08 06
	[ +19.178294] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 a3 65 b9 28 15 08 06
	[  +0.347929] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 6d df 33 a5 ad 08 06
	[  +0.000319] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 7f 53 f9 15 93 08 06
	[Nov15 10:33] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 96 ed 10 e8 9a 80 08 06
	[  +0.000481] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 a3 65 b9 28 15 08 06
	[ +35.214217] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 c2 9a 56 52 0c 08 06
	[  +9.104720] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 c2 c4 d1 7c dd 08 06
	[Nov15 10:34] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 68 1e 80 53 05 08 06
	[  +0.000444] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 c2 c4 d1 7c dd 08 06
	[ +18.836046] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 98 db 78 4f 0b 08 06
	[  +0.000708] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 c2 9a 56 52 0c 08 06
	
	
	==> etcd [f436d1c59b9aa7e7de013c4f14ff334c591747eceb21a1af9f863b722b4262a5] <==
	{"level":"warn","ts":"2025-11-15T10:35:11.732855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:11.740706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:11.747712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:11.755062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:11.761996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:11.769148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:11.775561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:11.782898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:11.827293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:11.836003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:11.842069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:11.848250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:11.856654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:11.863069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:11.869657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:11.875688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:11.929455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:11.936063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:11.944895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:11.950865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:11.968096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:11.974671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:11.982028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:12.062509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51314","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-15T10:35:56.130587Z","caller":"traceutil/trace.go:172","msg":"trace[2050415892] transaction","detail":"{read_only:false; response_revision:445; number_of_response:1; }","duration":"134.341852ms","start":"2025-11-15T10:35:55.996228Z","end":"2025-11-15T10:35:56.130569Z","steps":["trace[2050415892] 'process raft request'  (duration: 134.218719ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:36:18 up  2:18,  0 user,  load average: 4.21, 4.40, 2.81
	Linux default-k8s-diff-port-026691 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0b98322c65a7fc9f1bcbec42de301ba4e48d937e837a783125a839f925d82c0c] <==
	I1115 10:35:21.023465       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:35:21.024625       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 10:35:21.024802       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:35:21.024829       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:35:21.024871       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:35:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:35:21.328091       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:35:21.328500       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:35:21.328522       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:35:21.328738       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 10:35:51.328149       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1115 10:35:51.328170       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 10:35:51.328321       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 10:35:51.329034       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1115 10:35:52.728717       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:35:52.728754       1 metrics.go:72] Registering metrics
	I1115 10:35:52.728826       1 controller.go:711] "Syncing nftables rules"
	I1115 10:36:01.334034       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:36:01.334087       1 main.go:301] handling current node
	I1115 10:36:11.327537       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:36:11.327574       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f777104202c390d29a082cf5f5ee6f9553f200076f0cf138c174f96308be1fbb] <==
	I1115 10:35:12.732822       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 10:35:12.732834       1 cache.go:39] Caches are synced for autoregister controller
	I1115 10:35:12.735042       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1115 10:35:12.739020       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 10:35:12.743608       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 10:35:12.744636       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:35:12.931124       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:35:13.600786       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1115 10:35:13.605366       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1115 10:35:13.605389       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:35:14.269778       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:35:14.320205       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:35:14.431931       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1115 10:35:14.440622       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1115 10:35:14.441845       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:35:14.447074       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:35:14.657035       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:35:15.190189       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:35:15.243688       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1115 10:35:15.255306       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 10:35:19.808938       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:35:20.357779       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1115 10:35:20.710354       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:35:20.715828       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1115 10:36:16.610381       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:48938: use of closed network connection
	
	
	==> kube-controller-manager [a0e319e0a134db6d217d8d2b647adcf0bed749e9399d212315d18e32f4c9a4f1] <==
	I1115 10:35:19.631636       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 10:35:19.632356       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1115 10:35:19.632931       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 10:35:19.634147       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1115 10:35:19.634178       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 10:35:19.641358       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 10:35:19.646805       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 10:35:19.653235       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1115 10:35:19.653329       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 10:35:19.654554       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 10:35:19.654576       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 10:35:19.654598       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1115 10:35:19.654707       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 10:35:19.654732       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 10:35:19.654789       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 10:35:19.654798       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1115 10:35:19.654936       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 10:35:19.655040       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 10:35:19.655230       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 10:35:19.655940       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 10:35:19.660240       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:35:19.660256       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1115 10:35:19.663610       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:35:19.681848       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:36:04.598759       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [551cb7671967aff30b23e270f4a08a899f009b67b4053b4c2a97771a1c6da57c] <==
	I1115 10:35:20.780667       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:35:20.927764       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:35:21.031053       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:35:21.032565       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1115 10:35:21.042083       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:35:21.140462       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:35:21.140538       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:35:21.146513       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:35:21.146898       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:35:21.146935       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:35:21.148388       1 config.go:200] "Starting service config controller"
	I1115 10:35:21.148408       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:35:21.148434       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:35:21.148439       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:35:21.148453       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:35:21.148458       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:35:21.148848       1 config.go:309] "Starting node config controller"
	I1115 10:35:21.148996       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:35:21.149043       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:35:21.248554       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:35:21.248563       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:35:21.248584       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [54e728cef9b5e04624cce43b87151ea1580fa7f8a8d68800bd869ba5b2b65494] <==
	E1115 10:35:12.743655       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 10:35:12.743799       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 10:35:12.743886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 10:35:12.743981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 10:35:12.744712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 10:35:12.744770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 10:35:12.744770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 10:35:12.743306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 10:35:13.620781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 10:35:13.638140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 10:35:13.646470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 10:35:13.693327       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 10:35:13.720011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 10:35:13.753880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 10:35:13.780034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 10:35:13.855432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 10:35:13.872006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 10:35:13.898630       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1115 10:35:13.907301       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 10:35:13.944177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 10:35:13.947362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 10:35:13.992070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 10:35:14.034364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 10:35:14.059272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1115 10:35:15.933715       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:35:19 default-k8s-diff-port-026691 kubelet[1460]: I1115 10:35:19.725685    1460 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 15 10:35:19 default-k8s-diff-port-026691 kubelet[1460]: I1115 10:35:19.726510    1460 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 15 10:35:20 default-k8s-diff-port-026691 kubelet[1460]: I1115 10:35:20.455087    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5tq5\" (UniqueName: \"kubernetes.io/projected/ee48d34b-ae60-4a03-a7bd-df76e089eebb-kube-api-access-z5tq5\") pod \"kube-proxy-c5bw5\" (UID: \"ee48d34b-ae60-4a03-a7bd-df76e089eebb\") " pod="kube-system/kube-proxy-c5bw5"
	Nov 15 10:35:20 default-k8s-diff-port-026691 kubelet[1460]: I1115 10:35:20.455150    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9e1f7579-f5a2-44cd-b77f-71219cd8827d-cni-cfg\") pod \"kindnet-hjdrk\" (UID: \"9e1f7579-f5a2-44cd-b77f-71219cd8827d\") " pod="kube-system/kindnet-hjdrk"
	Nov 15 10:35:20 default-k8s-diff-port-026691 kubelet[1460]: I1115 10:35:20.455185    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee48d34b-ae60-4a03-a7bd-df76e089eebb-xtables-lock\") pod \"kube-proxy-c5bw5\" (UID: \"ee48d34b-ae60-4a03-a7bd-df76e089eebb\") " pod="kube-system/kube-proxy-c5bw5"
	Nov 15 10:35:20 default-k8s-diff-port-026691 kubelet[1460]: I1115 10:35:20.455211    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e1f7579-f5a2-44cd-b77f-71219cd8827d-xtables-lock\") pod \"kindnet-hjdrk\" (UID: \"9e1f7579-f5a2-44cd-b77f-71219cd8827d\") " pod="kube-system/kindnet-hjdrk"
	Nov 15 10:35:20 default-k8s-diff-port-026691 kubelet[1460]: I1115 10:35:20.455272    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ee48d34b-ae60-4a03-a7bd-df76e089eebb-kube-proxy\") pod \"kube-proxy-c5bw5\" (UID: \"ee48d34b-ae60-4a03-a7bd-df76e089eebb\") " pod="kube-system/kube-proxy-c5bw5"
	Nov 15 10:35:20 default-k8s-diff-port-026691 kubelet[1460]: I1115 10:35:20.455323    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee48d34b-ae60-4a03-a7bd-df76e089eebb-lib-modules\") pod \"kube-proxy-c5bw5\" (UID: \"ee48d34b-ae60-4a03-a7bd-df76e089eebb\") " pod="kube-system/kube-proxy-c5bw5"
	Nov 15 10:35:20 default-k8s-diff-port-026691 kubelet[1460]: I1115 10:35:20.455346    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e1f7579-f5a2-44cd-b77f-71219cd8827d-lib-modules\") pod \"kindnet-hjdrk\" (UID: \"9e1f7579-f5a2-44cd-b77f-71219cd8827d\") " pod="kube-system/kindnet-hjdrk"
	Nov 15 10:35:20 default-k8s-diff-port-026691 kubelet[1460]: I1115 10:35:20.455369    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhh82\" (UniqueName: \"kubernetes.io/projected/9e1f7579-f5a2-44cd-b77f-71219cd8827d-kube-api-access-hhh82\") pod \"kindnet-hjdrk\" (UID: \"9e1f7579-f5a2-44cd-b77f-71219cd8827d\") " pod="kube-system/kindnet-hjdrk"
	Nov 15 10:35:20 default-k8s-diff-port-026691 kubelet[1460]: W1115 10:35:20.689423    1460 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/acb25a518a850a99b04a9ecc3ee276eeb45082aa05801ad29c50dc8761212798/crio-07c701352514c186341edefd795842174ecff2c4bf2b16b7ef843af8b21deca5 WatchSource:0}: Error finding container 07c701352514c186341edefd795842174ecff2c4bf2b16b7ef843af8b21deca5: Status 404 returned error can't find the container with id 07c701352514c186341edefd795842174ecff2c4bf2b16b7ef843af8b21deca5
	Nov 15 10:35:20 default-k8s-diff-port-026691 kubelet[1460]: W1115 10:35:20.690372    1460 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/acb25a518a850a99b04a9ecc3ee276eeb45082aa05801ad29c50dc8761212798/crio-89e181a537fdd4f29b87e80fa0ba787e9d05fc8aca7389581bb399fc74a55db6 WatchSource:0}: Error finding container 89e181a537fdd4f29b87e80fa0ba787e9d05fc8aca7389581bb399fc74a55db6: Status 404 returned error can't find the container with id 89e181a537fdd4f29b87e80fa0ba787e9d05fc8aca7389581bb399fc74a55db6
	Nov 15 10:35:21 default-k8s-diff-port-026691 kubelet[1460]: I1115 10:35:21.244274    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-hjdrk" podStartSLOduration=1.244248724 podStartE2EDuration="1.244248724s" podCreationTimestamp="2025-11-15 10:35:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:35:21.243604489 +0000 UTC m=+6.282559454" watchObservedRunningTime="2025-11-15 10:35:21.244248724 +0000 UTC m=+6.283203648"
	Nov 15 10:35:25 default-k8s-diff-port-026691 kubelet[1460]: I1115 10:35:25.147277    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c5bw5" podStartSLOduration=5.147258793 podStartE2EDuration="5.147258793s" podCreationTimestamp="2025-11-15 10:35:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:35:21.326749325 +0000 UTC m=+6.365704288" watchObservedRunningTime="2025-11-15 10:35:25.147258793 +0000 UTC m=+10.186213736"
	Nov 15 10:36:01 default-k8s-diff-port-026691 kubelet[1460]: I1115 10:36:01.518236    1460 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 15 10:36:01 default-k8s-diff-port-026691 kubelet[1460]: I1115 10:36:01.600852    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6c4ca54-e0fe-45ee-88a6-33bdccbb876c-config-volume\") pod \"coredns-66bc5c9577-5q2j4\" (UID: \"e6c4ca54-e0fe-45ee-88a6-33bdccbb876c\") " pod="kube-system/coredns-66bc5c9577-5q2j4"
	Nov 15 10:36:01 default-k8s-diff-port-026691 kubelet[1460]: I1115 10:36:01.600899    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7dedf7a9-415d-4260-b225-7ca171744768-tmp\") pod \"storage-provisioner\" (UID: \"7dedf7a9-415d-4260-b225-7ca171744768\") " pod="kube-system/storage-provisioner"
	Nov 15 10:36:01 default-k8s-diff-port-026691 kubelet[1460]: I1115 10:36:01.600916    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9rnm\" (UniqueName: \"kubernetes.io/projected/7dedf7a9-415d-4260-b225-7ca171744768-kube-api-access-s9rnm\") pod \"storage-provisioner\" (UID: \"7dedf7a9-415d-4260-b225-7ca171744768\") " pod="kube-system/storage-provisioner"
	Nov 15 10:36:01 default-k8s-diff-port-026691 kubelet[1460]: I1115 10:36:01.600934    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4kw2\" (UniqueName: \"kubernetes.io/projected/e6c4ca54-e0fe-45ee-88a6-33bdccbb876c-kube-api-access-n4kw2\") pod \"coredns-66bc5c9577-5q2j4\" (UID: \"e6c4ca54-e0fe-45ee-88a6-33bdccbb876c\") " pod="kube-system/coredns-66bc5c9577-5q2j4"
	Nov 15 10:36:01 default-k8s-diff-port-026691 kubelet[1460]: W1115 10:36:01.865719    1460 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/acb25a518a850a99b04a9ecc3ee276eeb45082aa05801ad29c50dc8761212798/crio-f61b3f9defa4d9a4af25d043dc7c9ad4a1a0cc2e5aa7241c08fc0a2dcfe6b3df WatchSource:0}: Error finding container f61b3f9defa4d9a4af25d043dc7c9ad4a1a0cc2e5aa7241c08fc0a2dcfe6b3df: Status 404 returned error can't find the container with id f61b3f9defa4d9a4af25d043dc7c9ad4a1a0cc2e5aa7241c08fc0a2dcfe6b3df
	Nov 15 10:36:01 default-k8s-diff-port-026691 kubelet[1460]: W1115 10:36:01.886510    1460 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/acb25a518a850a99b04a9ecc3ee276eeb45082aa05801ad29c50dc8761212798/crio-edcdfac6e7ae8df90e08aea5e0fe925ddc60f8b52c7c0f35897b341e1d655ffe WatchSource:0}: Error finding container edcdfac6e7ae8df90e08aea5e0fe925ddc60f8b52c7c0f35897b341e1d655ffe: Status 404 returned error can't find the container with id edcdfac6e7ae8df90e08aea5e0fe925ddc60f8b52c7c0f35897b341e1d655ffe
	Nov 15 10:36:02 default-k8s-diff-port-026691 kubelet[1460]: I1115 10:36:02.343526    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.343501953 podStartE2EDuration="41.343501953s" podCreationTimestamp="2025-11-15 10:35:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:36:02.332994798 +0000 UTC m=+47.371949765" watchObservedRunningTime="2025-11-15 10:36:02.343501953 +0000 UTC m=+47.382456897"
	Nov 15 10:36:04 default-k8s-diff-port-026691 kubelet[1460]: I1115 10:36:04.501205    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5q2j4" podStartSLOduration=44.501176519 podStartE2EDuration="44.501176519s" podCreationTimestamp="2025-11-15 10:35:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:36:02.343344647 +0000 UTC m=+47.382299626" watchObservedRunningTime="2025-11-15 10:36:04.501176519 +0000 UTC m=+49.540131464"
	Nov 15 10:36:04 default-k8s-diff-port-026691 kubelet[1460]: I1115 10:36:04.620932    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmlnx\" (UniqueName: \"kubernetes.io/projected/8e2e3c26-b883-4c84-b07b-e107e5b36bbc-kube-api-access-xmlnx\") pod \"busybox\" (UID: \"8e2e3c26-b883-4c84-b07b-e107e5b36bbc\") " pod="default/busybox"
	Nov 15 10:36:04 default-k8s-diff-port-026691 kubelet[1460]: W1115 10:36:04.831437    1460 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/acb25a518a850a99b04a9ecc3ee276eeb45082aa05801ad29c50dc8761212798/crio-849926d34b41ada37431cbadf53528116129ed3d262c77c57fc26aa31edb6294 WatchSource:0}: Error finding container 849926d34b41ada37431cbadf53528116129ed3d262c77c57fc26aa31edb6294: Status 404 returned error can't find the container with id 849926d34b41ada37431cbadf53528116129ed3d262c77c57fc26aa31edb6294
	
	
	==> storage-provisioner [d3c681e2b21c3543c843dedb2f20dbb175f98ed76626163f64c4cb96575f0daf] <==
	I1115 10:36:01.936993       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 10:36:01.940350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:01.947633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:36:01.947849       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:36:01.952212       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e5b7cf19-8a06-483d-895a-a97445d789b0", APIVersion:"v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-026691_dc97cec9-191b-4f33-ac46-55052399561b became leader
	I1115 10:36:01.952406       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-026691_dc97cec9-191b-4f33-ac46-55052399561b!
	W1115 10:36:01.956902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:01.980614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:36:02.053444       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-026691_dc97cec9-191b-4f33-ac46-55052399561b!
	W1115 10:36:03.986200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:03.991540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:05.994817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:06.000462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:08.004131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:08.009584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:10.012813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:10.017135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:12.021226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:12.042151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:14.046493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:14.053068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:16.056606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:16.060508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:18.064772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:18.068518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-026691 -n default-k8s-diff-port-026691
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-026691 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.27s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.01s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-086099 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-086099 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (243.29272ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:25Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-086099 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
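The MK_ADDON_ENABLE_PAUSED error above comes from minikube's paused-state check, which (per the stderr) shells out to "sudo runc list -f json" on the node and fails here because /run/runc does not exist there. An illustrative way to rerun that check by hand against this profile, assuming the newest-cni-086099 node container is still running, is:

	out/minikube-linux-amd64 ssh -p newest-cni-086099 -- sudo runc list -f json   # the command the paused-state check shells out to
	out/minikube-linux-amd64 ssh -p newest-cni-086099 -- sudo ls /run/runc        # checks whether the directory named in the error exists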
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-086099
helpers_test.go:243: (dbg) docker inspect newest-cni-086099:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e6860e06d975adb47ad2079893175972a79fb3e0d03ee7ef837f4b7f29e21e14",
	        "Created": "2025-11-15T10:35:57.263723596Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 379930,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:35:57.297749674Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/e6860e06d975adb47ad2079893175972a79fb3e0d03ee7ef837f4b7f29e21e14/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6860e06d975adb47ad2079893175972a79fb3e0d03ee7ef837f4b7f29e21e14/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6860e06d975adb47ad2079893175972a79fb3e0d03ee7ef837f4b7f29e21e14/hosts",
	        "LogPath": "/var/lib/docker/containers/e6860e06d975adb47ad2079893175972a79fb3e0d03ee7ef837f4b7f29e21e14/e6860e06d975adb47ad2079893175972a79fb3e0d03ee7ef837f4b7f29e21e14-json.log",
	        "Name": "/newest-cni-086099",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-086099:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-086099",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e6860e06d975adb47ad2079893175972a79fb3e0d03ee7ef837f4b7f29e21e14",
	                "LowerDir": "/var/lib/docker/overlay2/9482d5d3fc87916f5559bace41af8ac4799d27e44564678531b92a41e1b27eaf-init/diff:/var/lib/docker/overlay2/507c85b6fb43d9c98216fac79d7a8d08dd20b2a63d0fbfc758336e5d38c04044/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9482d5d3fc87916f5559bace41af8ac4799d27e44564678531b92a41e1b27eaf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9482d5d3fc87916f5559bace41af8ac4799d27e44564678531b92a41e1b27eaf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9482d5d3fc87916f5559bace41af8ac4799d27e44564678531b92a41e1b27eaf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-086099",
	                "Source": "/var/lib/docker/volumes/newest-cni-086099/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-086099",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-086099",
	                "name.minikube.sigs.k8s.io": "newest-cni-086099",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e81bb2ad2156e3244d0abcb5aa34e938b5319ce2491911a5a15d1feaf390f722",
	            "SandboxKey": "/var/run/docker/netns/e81bb2ad2156",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-086099": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "09708c4610e17a8aeca1147b11bbc4d170ab97359e0b99b5bd4de917c0e4fd72",
	                    "EndpointID": "02ea83a64bd06512d96074a66f06bf1c2004e3f8b8fd7a5f3f6d1d21d4b266a8",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "92:04:51:28:8e:91",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-086099",
	                        "e6860e06d975"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-086099 -n newest-cni-086099
E1115 10:36:25.883227   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kindnet-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:36:25.889666   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kindnet-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:36:25.901078   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kindnet-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-086099 logs -n 25
E1115 10:36:25.923312   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kindnet-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:36:25.964732   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kindnet-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:36:26.046868   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kindnet-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:36:26.208553   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kindnet-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:36:26.530729   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kindnet-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-931243 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p bridge-931243 sudo crio config                                                                                                                                                                                                             │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ delete  │ -p bridge-931243                                                                                                                                                                                                                              │ bridge-931243                │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ delete  │ -p disable-driver-mounts-435527                                                                                                                                                                                                               │ disable-driver-mounts-435527 │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ start   │ -p default-k8s-diff-port-026691 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p no-preload-283677 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ start   │ -p no-preload-283677 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable metrics-server -p embed-certs-719574 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ stop    │ -p embed-certs-719574 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ image   │ old-k8s-version-087235 image list --format=json                                                                                                                                                                                               │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ pause   │ -p old-k8s-version-087235 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ delete  │ -p old-k8s-version-087235                                                                                                                                                                                                                     │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-719574 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p embed-certs-719574 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ delete  │ -p old-k8s-version-087235                                                                                                                                                                                                                     │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p newest-cni-086099 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:36 UTC │
	│ image   │ no-preload-283677 image list --format=json                                                                                                                                                                                                    │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ pause   │ -p no-preload-283677 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ delete  │ -p no-preload-283677                                                                                                                                                                                                                          │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p no-preload-283677                                                                                                                                                                                                                          │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-026691 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-026691 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-086099 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:35:51
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:35:51.880635  378695 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:35:51.880972  378695 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:51.880985  378695 out.go:374] Setting ErrFile to fd 2...
	I1115 10:35:51.880990  378695 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:51.881260  378695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 10:35:51.881819  378695 out.go:368] Setting JSON to false
	I1115 10:35:51.883178  378695 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8289,"bootTime":1763194663,"procs":305,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:35:51.883287  378695 start.go:143] virtualization: kvm guest
	I1115 10:35:51.885121  378695 out.go:179] * [newest-cni-086099] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:35:51.886362  378695 notify.go:221] Checking for updates...
	I1115 10:35:51.886418  378695 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 10:35:51.887691  378695 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:35:51.888785  378695 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:35:51.889883  378695 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube
	I1115 10:35:51.891041  378695 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:35:51.895496  378695 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:35:51.897243  378695 config.go:182] Loaded profile config "default-k8s-diff-port-026691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:51.897400  378695 config.go:182] Loaded profile config "embed-certs-719574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:51.897562  378695 config.go:182] Loaded profile config "no-preload-283677": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:51.897686  378695 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:35:51.923206  378695 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:35:51.923309  378695 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:35:51.980066  378695 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:true NGoroutines:77 SystemTime:2025-11-15 10:35:51.97030866 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed
by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:35:51.980169  378695 docker.go:319] overlay module found
	I1115 10:35:51.982196  378695 out.go:179] * Using the docker driver based on user configuration
	I1115 10:35:51.983355  378695 start.go:309] selected driver: docker
	I1115 10:35:51.983369  378695 start.go:930] validating driver "docker" against <nil>
	I1115 10:35:51.983380  378695 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:35:51.984213  378695 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:35:52.044923  378695 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:76 SystemTime:2025-11-15 10:35:52.034876039 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:35:52.045179  378695 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1115 10:35:52.045216  378695 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1115 10:35:52.045457  378695 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1115 10:35:52.047189  378695 out.go:179] * Using Docker driver with root privileges
	I1115 10:35:52.048407  378695 cni.go:84] Creating CNI manager for ""
	I1115 10:35:52.048473  378695 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:35:52.048484  378695 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 10:35:52.048535  378695 start.go:353] cluster config:
	{Name:newest-cni-086099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:35:52.049826  378695 out.go:179] * Starting "newest-cni-086099" primary control-plane node in "newest-cni-086099" cluster
	I1115 10:35:52.050909  378695 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:35:52.052056  378695 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:35:52.053065  378695 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:35:52.053098  378695 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 10:35:52.053116  378695 cache.go:65] Caching tarball of preloaded images
	I1115 10:35:52.053151  378695 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:35:52.053229  378695 preload.go:238] Found /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 10:35:52.053246  378695 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:35:52.053398  378695 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/config.json ...
	I1115 10:35:52.053424  378695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/config.json: {Name:mkf8d02e5e19217377f4420029b0cc1adccada68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:52.074755  378695 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:35:52.074774  378695 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:35:52.074789  378695 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:35:52.074816  378695 start.go:360] acquireMachinesLock for newest-cni-086099: {Name:mk9065475199777f18a95aabcc9dbfda12f72647 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:35:52.074909  378695 start.go:364] duration metric: took 76.491µs to acquireMachinesLock for "newest-cni-086099"
	I1115 10:35:52.074932  378695 start.go:93] Provisioning new machine with config: &{Name:newest-cni-086099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:35:52.075027  378695 start.go:125] createHost starting for "" (driver="docker")
	W1115 10:35:48.630700  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	W1115 10:35:50.630784  368849 pod_ready.go:104] pod "coredns-66bc5c9577-66nkj" is not "Ready", error: <nil>
	I1115 10:35:51.131341  368849 pod_ready.go:94] pod "coredns-66bc5c9577-66nkj" is "Ready"
	I1115 10:35:51.131376  368849 pod_ready.go:86] duration metric: took 41.005975825s for pod "coredns-66bc5c9577-66nkj" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.134231  368849 pod_ready.go:83] waiting for pod "etcd-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.138317  368849 pod_ready.go:94] pod "etcd-no-preload-283677" is "Ready"
	I1115 10:35:51.138345  368849 pod_ready.go:86] duration metric: took 4.088368ms for pod "etcd-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.140317  368849 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.143990  368849 pod_ready.go:94] pod "kube-apiserver-no-preload-283677" is "Ready"
	I1115 10:35:51.144012  368849 pod_ready.go:86] duration metric: took 3.672536ms for pod "kube-apiserver-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.145780  368849 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.329880  368849 pod_ready.go:94] pod "kube-controller-manager-no-preload-283677" is "Ready"
	I1115 10:35:51.329907  368849 pod_ready.go:86] duration metric: took 184.110671ms for pod "kube-controller-manager-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.529891  368849 pod_ready.go:83] waiting for pod "kube-proxy-vjbxg" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:51.929529  368849 pod_ready.go:94] pod "kube-proxy-vjbxg" is "Ready"
	I1115 10:35:51.929559  368849 pod_ready.go:86] duration metric: took 399.636424ms for pod "kube-proxy-vjbxg" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 10:35:49.488114  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:35:51.988145  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	I1115 10:35:52.129598  368849 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:52.529568  368849 pod_ready.go:94] pod "kube-scheduler-no-preload-283677" is "Ready"
	I1115 10:35:52.529597  368849 pod_ready.go:86] duration metric: took 399.970584ms for pod "kube-scheduler-no-preload-283677" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:52.529608  368849 pod_ready.go:40] duration metric: took 42.409442772s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:35:52.581745  368849 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:35:52.583831  368849 out.go:179] * Done! kubectl is now configured to use "no-preload-283677" cluster and "default" namespace by default
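The readiness loop above waits, label by label, for one pod from each control-plane component before declaring the cluster ready. A roughly equivalent manual check, sketched with kubectl against the context name the log reports (the 120s timeout is chosen arbitrarily):

    # the same six labels the log lists at 10:35:52.529608
    for l in k8s-app=kube-dns component=etcd component=kube-apiserver \
             component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl --context no-preload-283677 -n kube-system \
        wait --for=condition=Ready pod -l "$l" --timeout=120s
    done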
	I1115 10:35:49.830432  377744 out.go:252] * Restarting existing docker container for "embed-certs-719574" ...
	I1115 10:35:49.830517  377744 cli_runner.go:164] Run: docker start embed-certs-719574
	I1115 10:35:50.114791  377744 cli_runner.go:164] Run: docker container inspect embed-certs-719574 --format={{.State.Status}}
	I1115 10:35:50.134754  377744 kic.go:430] container "embed-certs-719574" state is running.
	I1115 10:35:50.135204  377744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-719574
	I1115 10:35:50.154606  377744 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/config.json ...
	I1115 10:35:50.154928  377744 machine.go:94] provisionDockerMachine start ...
	I1115 10:35:50.155043  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:50.174749  377744 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:50.175176  377744 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1115 10:35:50.175216  377744 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:35:50.176012  377744 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38764->127.0.0.1:33119: read: connection reset by peer
	I1115 10:35:53.310173  377744 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-719574
	
	I1115 10:35:53.310214  377744 ubuntu.go:182] provisioning hostname "embed-certs-719574"
	I1115 10:35:53.310354  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:53.329392  377744 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:53.329615  377744 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1115 10:35:53.329634  377744 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-719574 && echo "embed-certs-719574" | sudo tee /etc/hostname
	I1115 10:35:53.472294  377744 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-719574
	
	I1115 10:35:53.472411  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:53.492862  377744 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:53.493213  377744 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1115 10:35:53.493264  377744 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-719574' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-719574/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-719574' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:35:53.625059  377744 main.go:143] libmachine: SSH cmd err, output: <nil>: 
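The guarded script above only rewrites /etc/hosts when no existing line already names the machine, replacing a 127.0.1.1 entry if one is present and appending one otherwise. The end state on the node can be confirmed with (sketch, not part of the captured run):

    hostname                        # embed-certs-719574
    grep '^127.0.1.1' /etc/hosts    # 127.0.1.1 embed-certs-719574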
	I1115 10:35:53.625092  377744 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-55448/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-55448/.minikube}
	I1115 10:35:53.625126  377744 ubuntu.go:190] setting up certificates
	I1115 10:35:53.625143  377744 provision.go:84] configureAuth start
	I1115 10:35:53.625244  377744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-719574
	I1115 10:35:53.644516  377744 provision.go:143] copyHostCerts
	I1115 10:35:53.644586  377744 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem, removing ...
	I1115 10:35:53.644598  377744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem
	I1115 10:35:53.644672  377744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem (1082 bytes)
	I1115 10:35:53.644781  377744 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem, removing ...
	I1115 10:35:53.644790  377744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem
	I1115 10:35:53.644816  377744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem (1123 bytes)
	I1115 10:35:53.644891  377744 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem, removing ...
	I1115 10:35:53.644898  377744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem
	I1115 10:35:53.644921  377744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem (1679 bytes)
	I1115 10:35:53.645022  377744 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem org=jenkins.embed-certs-719574 san=[127.0.0.1 192.168.94.2 embed-certs-719574 localhost minikube]
	I1115 10:35:53.893496  377744 provision.go:177] copyRemoteCerts
	I1115 10:35:53.893597  377744 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:35:53.893653  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:53.913597  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:54.011809  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:35:54.029841  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1115 10:35:54.048781  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 10:35:54.067015  377744 provision.go:87] duration metric: took 441.854991ms to configureAuth
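The three scp calls above install the CA, server certificate and server key that configureAuth just (re)generated for the machine. A quick manual sanity check against the same paths (sketch, not part of the captured run):

    sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
    sudo openssl x509 -noout -subject -ext subjectAltName -in /etc/docker/server.pem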
	I1115 10:35:54.067059  377744 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:35:54.067256  377744 config.go:182] Loaded profile config "embed-certs-719574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:54.067376  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:54.087249  377744 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:54.087454  377744 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1115 10:35:54.087469  377744 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:35:54.383177  377744 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:35:54.383205  377744 machine.go:97] duration metric: took 4.228252503s to provisionDockerMachine
	I1115 10:35:54.383221  377744 start.go:293] postStartSetup for "embed-certs-719574" (driver="docker")
	I1115 10:35:54.383246  377744 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:35:54.383323  377744 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:35:54.383389  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:54.402613  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:54.497991  377744 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:35:54.501812  377744 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:35:54.501845  377744 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:35:54.501859  377744 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/addons for local assets ...
	I1115 10:35:54.501927  377744 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/files for local assets ...
	I1115 10:35:54.502073  377744 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem -> 589622.pem in /etc/ssl/certs
	I1115 10:35:54.502192  377744 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:35:54.510401  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:35:54.528845  377744 start.go:296] duration metric: took 145.608503ms for postStartSetup
	I1115 10:35:54.528929  377744 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:35:54.529033  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:54.548704  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:52.076936  378695 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:35:52.077138  378695 start.go:159] libmachine.API.Create for "newest-cni-086099" (driver="docker")
	I1115 10:35:52.077166  378695 client.go:173] LocalClient.Create starting
	I1115 10:35:52.077242  378695 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem
	I1115 10:35:52.077273  378695 main.go:143] libmachine: Decoding PEM data...
	I1115 10:35:52.077289  378695 main.go:143] libmachine: Parsing certificate...
	I1115 10:35:52.077346  378695 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem
	I1115 10:35:52.077364  378695 main.go:143] libmachine: Decoding PEM data...
	I1115 10:35:52.077373  378695 main.go:143] libmachine: Parsing certificate...
	I1115 10:35:52.077693  378695 cli_runner.go:164] Run: docker network inspect newest-cni-086099 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:35:52.094513  378695 cli_runner.go:211] docker network inspect newest-cni-086099 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:35:52.094577  378695 network_create.go:284] running [docker network inspect newest-cni-086099] to gather additional debugging logs...
	I1115 10:35:52.094597  378695 cli_runner.go:164] Run: docker network inspect newest-cni-086099
	W1115 10:35:52.112168  378695 cli_runner.go:211] docker network inspect newest-cni-086099 returned with exit code 1
	I1115 10:35:52.112212  378695 network_create.go:287] error running [docker network inspect newest-cni-086099]: docker network inspect newest-cni-086099: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-086099 not found
	I1115 10:35:52.112227  378695 network_create.go:289] output of [docker network inspect newest-cni-086099]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-086099 not found
	
	** /stderr **
	I1115 10:35:52.112312  378695 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:35:52.130531  378695 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-77644897380e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:f9:e2:83:c3:91} reservation:<nil>}
	I1115 10:35:52.131072  378695 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e808d03b2052 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:03:24:ef:92:a1} reservation:<nil>}
	I1115 10:35:52.131784  378695 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3fb45eeaa66e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1a:b3:b7:0f:2e:99} reservation:<nil>}
	I1115 10:35:52.132406  378695 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-31f43b806931 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0a:cc:8c:d8:0d:c5} reservation:<nil>}
	I1115 10:35:52.133098  378695 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-a057ad05bea0 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:4e:4d:10:e4:db:cb} reservation:<nil>}
	I1115 10:35:52.133911  378695 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-5402d8c1e78a IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:0a:f0:66:0a:22:a5} reservation:<nil>}
	I1115 10:35:52.134802  378695 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f61840}
	I1115 10:35:52.134825  378695 network_create.go:124] attempt to create docker network newest-cni-086099 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1115 10:35:52.134865  378695 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-086099 newest-cni-086099
	I1115 10:35:52.184306  378695 network_create.go:108] docker network newest-cni-086099 192.168.103.0/24 created
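The network.go lines above enumerate the existing bridge networks (192.168.49/58/67/76/85/94.0/24 are all taken) and pick the first free private /24. The same inventory can be reproduced with the inspect template the log itself uses (sketch):

    for n in $(docker network ls -q); do
      docker network inspect -f '{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}' "$n"
    done
    docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' newest-cni-086099   # 192.168.103.0/24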
	I1115 10:35:52.184341  378695 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-086099" container
	I1115 10:35:52.184418  378695 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:35:52.204038  378695 cli_runner.go:164] Run: docker volume create newest-cni-086099 --label name.minikube.sigs.k8s.io=newest-cni-086099 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:35:52.223076  378695 oci.go:103] Successfully created a docker volume newest-cni-086099
	I1115 10:35:52.223154  378695 cli_runner.go:164] Run: docker run --rm --name newest-cni-086099-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-086099 --entrypoint /usr/bin/test -v newest-cni-086099:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:35:52.620626  378695 oci.go:107] Successfully prepared a docker volume newest-cni-086099
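The short-lived "-preload-sidecar" run above mounts the freshly created named volume at /var and executes "test -d /var/lib"; since Docker seeds an empty named volume from the image's content on first mount, this appears to both populate the volume and confirm it is usable (that reading is inferred from the command, not stated in the log). Repeating the check by hand:

    docker run --rm --entrypoint /usr/bin/test \
      -v newest-cni-086099:/var \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 \
      -d /var/lib && echo "volume ok"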
	I1115 10:35:52.620689  378695 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:35:52.620707  378695 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:35:52.620778  378695 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-086099:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1115 10:35:54.641677  377744 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:35:54.646490  377744 fix.go:56] duration metric: took 4.836578375s for fixHost
	I1115 10:35:54.646531  377744 start.go:83] releasing machines lock for "embed-certs-719574", held for 4.836643994s
	I1115 10:35:54.646605  377744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-719574
	I1115 10:35:54.665925  377744 ssh_runner.go:195] Run: cat /version.json
	I1115 10:35:54.666009  377744 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:35:54.666054  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:54.666061  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:54.685752  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:54.686933  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:54.832262  377744 ssh_runner.go:195] Run: systemctl --version
	I1115 10:35:54.839294  377744 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:35:54.881869  377744 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:35:54.887543  377744 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:35:54.887616  377744 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:35:54.897470  377744 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:35:54.897495  377744 start.go:496] detecting cgroup driver to use...
	I1115 10:35:54.897526  377744 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:35:54.897575  377744 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:35:54.915183  377744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:35:54.936918  377744 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:35:54.937042  377744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:35:54.959514  377744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:35:54.974364  377744 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:35:55.064629  377744 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:35:55.149431  377744 docker.go:234] disabling docker service ...
	I1115 10:35:55.149491  377744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:35:55.164826  377744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:35:55.178539  377744 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:35:55.258146  377744 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:35:55.336854  377744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:35:55.350099  377744 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:35:55.371361  377744 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:35:55.371428  377744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:55.392170  377744 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:35:55.392226  377744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:55.402091  377744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:55.464259  377744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:55.527554  377744 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:35:55.536601  377744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:55.581816  377744 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:55.591398  377744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:55.656666  377744 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:35:55.665181  377744 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:35:55.673411  377744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:55.753200  377744 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:35:57.278236  377744 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.524976792s)
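The sed/grep sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place before the restart. Pieced together from those commands, the touched settings should come out roughly as follows (illustrative; the rest of the file is untouched):

    grep -nE 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected shape of the matching lines:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]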
	I1115 10:35:57.278272  377744 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:35:57.278324  377744 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:35:57.282657  377744 start.go:564] Will wait 60s for crictl version
	I1115 10:35:57.282733  377744 ssh_runner.go:195] Run: which crictl
	I1115 10:35:57.286574  377744 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:35:57.314817  377744 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:35:57.314911  377744 ssh_runner.go:195] Run: crio --version
	I1115 10:35:57.343990  377744 ssh_runner.go:195] Run: crio --version
	I1115 10:35:57.373426  377744 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1115 10:35:54.488332  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:35:56.987904  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	I1115 10:35:57.378513  377744 cli_runner.go:164] Run: docker network inspect embed-certs-719574 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:35:57.402028  377744 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1115 10:35:57.409345  377744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
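The one-liner above drops any existing host.minikube.internal entry from /etc/hosts and appends one that points at the network gateway 192.168.94.1, writing through a temp file and copying it back with sudo. Afterwards the entry should read (sketch):

    grep 'host.minikube.internal' /etc/hosts
    # 192.168.94.1	host.minikube.internal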
	I1115 10:35:57.420512  377744 kubeadm.go:884] updating cluster {Name:embed-certs-719574 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-719574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:35:57.420680  377744 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:35:57.420740  377744 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:35:57.458228  377744 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:35:57.458259  377744 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:35:57.458316  377744 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:35:57.485027  377744 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:35:57.485050  377744 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:35:57.485058  377744 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1115 10:35:57.485169  377744 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-719574 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-719574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:35:57.485252  377744 ssh_runner.go:195] Run: crio config
	I1115 10:35:57.536095  377744 cni.go:84] Creating CNI manager for ""
	I1115 10:35:57.536127  377744 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:35:57.536147  377744 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:35:57.536177  377744 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-719574 NodeName:embed-certs-719574 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:35:57.536329  377744 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-719574"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:35:57.536407  377744 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:35:57.544702  377744 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:35:57.544775  377744 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:35:57.554019  377744 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1115 10:35:57.569040  377744 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:35:57.585285  377744 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
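At this point the rendered kubeadm configuration (the InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration block printed above) sits on the node as /var/tmp/minikube/kubeadm.yaml.new. One way to sanity-check it by hand with the kubeadm binary staged under /var/lib/minikube/binaries (this invocation is an illustration, not part of the captured log):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new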
	I1115 10:35:57.600345  377744 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:35:57.604627  377744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:35:57.619569  377744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:57.710162  377744 ssh_runner.go:195] Run: sudo systemctl start kubelet
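The unit fragment printed earlier (the double ExecStart with --bootstrap-kubeconfig, --config, --hostname-override and --node-ip) is what the 368-byte scp above appears to place in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. After the daemon-reload and start, the effective unit can be inspected with (sketch):

    systemctl cat kubelet
    systemctl show -p ExecStart kubelet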
	I1115 10:35:57.731269  377744 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574 for IP: 192.168.94.2
	I1115 10:35:57.731297  377744 certs.go:195] generating shared ca certs ...
	I1115 10:35:57.731319  377744 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:57.731508  377744 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 10:35:57.731564  377744 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 10:35:57.731581  377744 certs.go:257] generating profile certs ...
	I1115 10:35:57.731700  377744 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/client.key
	I1115 10:35:57.731784  377744 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/apiserver.key.788254b7
	I1115 10:35:57.731906  377744 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/proxy-client.key
	I1115 10:35:57.732110  377744 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:35:57.732161  377744 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:35:57.732182  377744 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:35:57.732220  377744 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:35:57.732263  377744 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:35:57.732297  377744 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:35:57.732354  377744 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:35:57.733199  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:35:57.753928  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:35:57.776212  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:35:57.798569  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:35:57.855574  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1115 10:35:57.881192  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:35:57.958309  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:35:57.978725  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/embed-certs-719574/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:35:58.001721  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:35:58.020846  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:35:58.039367  377744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:35:58.064830  377744 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:35:58.080795  377744 ssh_runner.go:195] Run: openssl version
	I1115 10:35:58.087121  377744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:35:58.095754  377744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:58.099496  377744 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:58.099554  377744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:58.135273  377744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:35:58.145763  377744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:35:58.156943  377744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:35:58.161920  377744 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:35:58.162041  377744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:35:58.206129  377744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:35:58.214420  377744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:35:58.223061  377744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:35:58.226827  377744 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:35:58.226872  377744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:35:58.268503  377744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
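The b5213941.0, 51391683.0 and 3ec20f2e.0 link names above are not arbitrary: each is the OpenSSL subject hash of the corresponding certificate (the value printed by the openssl x509 -hash -noout calls) with a ".0" suffix, which is how OpenSSL looks up CAs in a hashed certificate directory. For the minikube CA, for example:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # -> /etc/ssl/certs/minikubeCA.pem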
	I1115 10:35:58.278233  377744 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:35:58.282629  377744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:35:58.349655  377744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:35:58.454042  377744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:35:58.576363  377744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:35:58.746644  377744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:35:58.782106  377744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
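Each -checkend 86400 call above exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, presumably so the existing control-plane certificates can be reused rather than regenerated. Spelled out for one of them (sketch):

    sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
    sudo openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt && echo "valid for at least 24h"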
	I1115 10:35:58.871080  377744 kubeadm.go:401] StartCluster: {Name:embed-certs-719574 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-719574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:35:58.871213  377744 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:35:58.871280  377744 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:35:58.960244  377744 cri.go:89] found id: "34a183c86eaa1af613ae4708887513840319366cbb4179e7f1678698297eade2"
	I1115 10:35:58.960271  377744 cri.go:89] found id: "a04037d02e2f108f2bc0bc86b223e37c201372e3009a0b55923609e9c3f5b7b5"
	I1115 10:35:58.960278  377744 cri.go:89] found id: "d2523d5b7384ac5c1a3b025b86aa00148daa68b776dfada0c3e9b0dd751d444c"
	I1115 10:35:58.960283  377744 cri.go:89] found id: "56627175c47b813bf4460ca13a901ec3152cbb4d22f0362e40133db8b19b3e87"
	I1115 10:35:58.960298  377744 cri.go:89] found id: ""
	I1115 10:35:58.960336  377744 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:35:58.974645  377744 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:35:58Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:35:58.974767  377744 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:35:59.046786  377744 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:35:59.046808  377744 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:35:59.046859  377744 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:35:59.056636  377744 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:35:59.057549  377744 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-719574" does not appear in /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:35:59.058047  377744 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-55448/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-719574" cluster setting kubeconfig missing "embed-certs-719574" context setting]
	I1115 10:35:59.058858  377744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:59.060778  377744 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:35:59.069779  377744 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1115 10:35:59.069815  377744 kubeadm.go:602] duration metric: took 22.998235ms to restartPrimaryControlPlane
	I1115 10:35:59.069826  377744 kubeadm.go:403] duration metric: took 198.758279ms to StartCluster
	I1115 10:35:59.069846  377744 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:59.069922  377744 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:35:59.071492  377744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:59.071756  377744 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:35:59.071888  377744 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:35:59.072018  377744 config.go:182] Loaded profile config "embed-certs-719574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:59.072030  377744 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-719574"
	I1115 10:35:59.072050  377744 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-719574"
	W1115 10:35:59.072059  377744 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:35:59.072081  377744 addons.go:70] Setting dashboard=true in profile "embed-certs-719574"
	I1115 10:35:59.072126  377744 addons.go:239] Setting addon dashboard=true in "embed-certs-719574"
	W1115 10:35:59.072141  377744 addons.go:248] addon dashboard should already be in state true
	I1115 10:35:59.072091  377744 host.go:66] Checking if "embed-certs-719574" exists ...
	I1115 10:35:59.072176  377744 host.go:66] Checking if "embed-certs-719574" exists ...
	I1115 10:35:59.072082  377744 addons.go:70] Setting default-storageclass=true in profile "embed-certs-719574"
	I1115 10:35:59.072227  377744 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-719574"
	I1115 10:35:59.072560  377744 cli_runner.go:164] Run: docker container inspect embed-certs-719574 --format={{.State.Status}}
	I1115 10:35:59.072736  377744 cli_runner.go:164] Run: docker container inspect embed-certs-719574 --format={{.State.Status}}
	I1115 10:35:59.072775  377744 cli_runner.go:164] Run: docker container inspect embed-certs-719574 --format={{.State.Status}}
	I1115 10:35:59.073400  377744 out.go:179] * Verifying Kubernetes components...
	I1115 10:35:59.074646  377744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:59.097674  377744 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 10:35:59.097741  377744 addons.go:239] Setting addon default-storageclass=true in "embed-certs-719574"
	W1115 10:35:59.097755  377744 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:35:59.097682  377744 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:35:59.097790  377744 host.go:66] Checking if "embed-certs-719574" exists ...
	I1115 10:35:59.098261  377744 cli_runner.go:164] Run: docker container inspect embed-certs-719574 --format={{.State.Status}}
	I1115 10:35:59.098922  377744 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:59.098988  377744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:35:59.099040  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:59.103435  377744 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 10:35:59.104647  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 10:35:59.104679  377744 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 10:35:59.104749  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:59.119302  377744 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:59.119331  377744 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:35:59.119398  377744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:35:59.120171  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:59.125098  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:59.137515  377744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:35:59.461029  377744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:59.461402  377744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:59.465397  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 10:35:59.465421  377744 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 10:35:59.550018  377744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:35:59.557165  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 10:35:59.557200  377744 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 10:35:57.180648  378695 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-086099:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.559815955s)
	I1115 10:35:57.180688  378695 kic.go:203] duration metric: took 4.559978988s to extract preloaded images to volume ...
	W1115 10:35:57.180808  378695 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 10:35:57.180907  378695 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 10:35:57.245170  378695 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-086099 --name newest-cni-086099 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-086099 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-086099 --network newest-cni-086099 --ip 192.168.103.2 --volume newest-cni-086099:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 10:35:57.553341  378695 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Running}}
	I1115 10:35:57.574001  378695 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:35:57.595723  378695 cli_runner.go:164] Run: docker exec newest-cni-086099 stat /var/lib/dpkg/alternatives/iptables
	I1115 10:35:57.648675  378695 oci.go:144] the created container "newest-cni-086099" has a running status.
	I1115 10:35:57.648711  378695 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa...
	I1115 10:35:57.758503  378695 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 10:35:57.788103  378695 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:35:57.813502  378695 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 10:35:57.813525  378695 kic_runner.go:114] Args: [docker exec --privileged newest-cni-086099 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 10:35:57.866879  378695 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:35:57.892578  378695 machine.go:94] provisionDockerMachine start ...
	I1115 10:35:57.892683  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:35:57.916142  378695 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:57.916445  378695 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1115 10:35:57.916463  378695 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:35:57.917246  378695 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47936->127.0.0.1:33124: read: connection reset by peer
	I1115 10:36:01.055800  378695 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-086099
	
	I1115 10:36:01.055829  378695 ubuntu.go:182] provisioning hostname "newest-cni-086099"
	I1115 10:36:01.055909  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:01.077686  378695 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:01.078023  378695 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1115 10:36:01.078042  378695 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-086099 && echo "newest-cni-086099" | sudo tee /etc/hostname
	I1115 10:36:01.223717  378695 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-086099
	
	I1115 10:36:01.223807  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:01.242452  378695 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:01.242668  378695 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1115 10:36:01.242685  378695 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-086099' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-086099/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-086099' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:36:01.376856  378695 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:36:01.376893  378695 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-55448/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-55448/.minikube}
	I1115 10:36:01.376932  378695 ubuntu.go:190] setting up certificates
	I1115 10:36:01.376976  378695 provision.go:84] configureAuth start
	I1115 10:36:01.377048  378695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-086099
	I1115 10:36:01.398840  378695 provision.go:143] copyHostCerts
	I1115 10:36:01.398983  378695 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem, removing ...
	I1115 10:36:01.399002  378695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem
	I1115 10:36:01.399077  378695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem (1082 bytes)
	I1115 10:36:01.399173  378695 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem, removing ...
	I1115 10:36:01.399183  378695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem
	I1115 10:36:01.399217  378695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem (1123 bytes)
	I1115 10:36:01.399290  378695 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem, removing ...
	I1115 10:36:01.399300  378695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem
	I1115 10:36:01.399336  378695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem (1679 bytes)
	I1115 10:36:01.399416  378695 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem org=jenkins.newest-cni-086099 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-086099]
	I1115 10:36:01.599358  378695 provision.go:177] copyRemoteCerts
	I1115 10:36:01.599429  378695 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:36:01.599467  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:01.617920  378695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:01.714257  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 10:36:01.736832  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 10:36:01.771414  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:36:01.789744  378695 provision.go:87] duration metric: took 412.746889ms to configureAuth
	I1115 10:36:01.789780  378695 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:36:01.790004  378695 config.go:182] Loaded profile config "newest-cni-086099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:01.790111  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:01.807644  378695 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:01.807895  378695 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1115 10:36:01.807913  378695 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1115 10:35:59.487887  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	W1115 10:36:01.488245  367608 node_ready.go:57] node "default-k8s-diff-port-026691" has "Ready":"False" status (will retry)
	I1115 10:36:01.988676  367608 node_ready.go:49] node "default-k8s-diff-port-026691" is "Ready"
	I1115 10:36:01.988712  367608 node_ready.go:38] duration metric: took 40.004362414s for node "default-k8s-diff-port-026691" to be "Ready" ...
	I1115 10:36:01.988728  367608 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:36:01.988785  367608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:36:02.002727  367608 api_server.go:72] duration metric: took 41.048135621s to wait for apiserver process to appear ...
	I1115 10:36:02.002761  367608 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:36:02.002786  367608 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1115 10:36:02.007061  367608 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1115 10:36:02.008035  367608 api_server.go:141] control plane version: v1.34.1
	I1115 10:36:02.008064  367608 api_server.go:131] duration metric: took 5.294787ms to wait for apiserver health ...
	I1115 10:36:02.008076  367608 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:36:02.011683  367608 system_pods.go:59] 8 kube-system pods found
	I1115 10:36:02.011713  367608 system_pods.go:61] "coredns-66bc5c9577-5q2j4" [e6c4ca54-e0fe-45ee-88a6-33bdccbb876c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:02.011719  367608 system_pods.go:61] "etcd-default-k8s-diff-port-026691" [f8228efa-91a3-41fb-9952-3b4063ddf162] Running
	I1115 10:36:02.011725  367608 system_pods.go:61] "kindnet-hjdrk" [9e1f7579-f5a2-44cd-b77f-71219cd8827d] Running
	I1115 10:36:02.011729  367608 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-026691" [9da86b3b-bfcd-4c40-ac86-ee9d9faeb9b0] Running
	I1115 10:36:02.011732  367608 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-026691" [23d9be87-1de1-4c38-b65e-b084cf0fed25] Running
	I1115 10:36:02.011737  367608 system_pods.go:61] "kube-proxy-c5bw5" [ee48d34b-ae60-4a03-a7bd-df76e089eebb] Running
	I1115 10:36:02.011741  367608 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-026691" [52b3711f-015c-4af6-8672-58642cbc0c16] Running
	I1115 10:36:02.011747  367608 system_pods.go:61] "storage-provisioner" [7dedf7a9-415d-4260-b225-7ca171744768] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:36:02.011757  367608 system_pods.go:74] duration metric: took 3.675183ms to wait for pod list to return data ...
	I1115 10:36:02.011767  367608 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:36:02.014095  367608 default_sa.go:45] found service account: "default"
	I1115 10:36:02.014113  367608 default_sa.go:55] duration metric: took 2.338136ms for default service account to be created ...
	I1115 10:36:02.014121  367608 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:36:02.016619  367608 system_pods.go:86] 8 kube-system pods found
	I1115 10:36:02.016644  367608 system_pods.go:89] "coredns-66bc5c9577-5q2j4" [e6c4ca54-e0fe-45ee-88a6-33bdccbb876c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:02.016650  367608 system_pods.go:89] "etcd-default-k8s-diff-port-026691" [f8228efa-91a3-41fb-9952-3b4063ddf162] Running
	I1115 10:36:02.016657  367608 system_pods.go:89] "kindnet-hjdrk" [9e1f7579-f5a2-44cd-b77f-71219cd8827d] Running
	I1115 10:36:02.016663  367608 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-026691" [9da86b3b-bfcd-4c40-ac86-ee9d9faeb9b0] Running
	I1115 10:36:02.016668  367608 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-026691" [23d9be87-1de1-4c38-b65e-b084cf0fed25] Running
	I1115 10:36:02.016676  367608 system_pods.go:89] "kube-proxy-c5bw5" [ee48d34b-ae60-4a03-a7bd-df76e089eebb] Running
	I1115 10:36:02.016681  367608 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-026691" [52b3711f-015c-4af6-8672-58642cbc0c16] Running
	I1115 10:36:02.016692  367608 system_pods.go:89] "storage-provisioner" [7dedf7a9-415d-4260-b225-7ca171744768] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:36:02.016714  367608 retry.go:31] will retry after 218.810216ms: missing components: kube-dns
	I1115 10:36:02.239606  367608 system_pods.go:86] 8 kube-system pods found
	I1115 10:36:02.239636  367608 system_pods.go:89] "coredns-66bc5c9577-5q2j4" [e6c4ca54-e0fe-45ee-88a6-33bdccbb876c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:02.239642  367608 system_pods.go:89] "etcd-default-k8s-diff-port-026691" [f8228efa-91a3-41fb-9952-3b4063ddf162] Running
	I1115 10:36:02.239648  367608 system_pods.go:89] "kindnet-hjdrk" [9e1f7579-f5a2-44cd-b77f-71219cd8827d] Running
	I1115 10:36:02.239654  367608 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-026691" [9da86b3b-bfcd-4c40-ac86-ee9d9faeb9b0] Running
	I1115 10:36:02.239657  367608 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-026691" [23d9be87-1de1-4c38-b65e-b084cf0fed25] Running
	I1115 10:36:02.239661  367608 system_pods.go:89] "kube-proxy-c5bw5" [ee48d34b-ae60-4a03-a7bd-df76e089eebb] Running
	I1115 10:36:02.239665  367608 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-026691" [52b3711f-015c-4af6-8672-58642cbc0c16] Running
	I1115 10:36:02.239671  367608 system_pods.go:89] "storage-provisioner" [7dedf7a9-415d-4260-b225-7ca171744768] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:36:02.239689  367608 retry.go:31] will retry after 377.391978ms: missing components: kube-dns
	I1115 10:35:59.653179  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 10:35:59.653211  377744 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 10:35:59.670277  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 10:35:59.670303  377744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 10:35:59.757741  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 10:35:59.757796  377744 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 10:35:59.771666  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 10:35:59.771696  377744 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 10:35:59.844282  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 10:35:59.844312  377744 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 10:35:59.859695  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 10:35:59.859723  377744 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 10:35:59.873202  377744 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:35:59.873227  377744 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 10:35:59.887124  377744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:36:03.675772  377744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.21470192s)
	I1115 10:36:03.675861  377744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.214437385s)
	I1115 10:36:03.675941  377744 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.125882332s)
	I1115 10:36:03.676037  377744 node_ready.go:35] waiting up to 6m0s for node "embed-certs-719574" to be "Ready" ...
	I1115 10:36:03.676084  377744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.788916637s)
	I1115 10:36:03.677758  377744 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-719574 addons enable metrics-server
	
	I1115 10:36:03.686848  377744 node_ready.go:49] node "embed-certs-719574" is "Ready"
	I1115 10:36:03.686872  377744 node_ready.go:38] duration metric: took 10.779527ms for node "embed-certs-719574" to be "Ready" ...
	I1115 10:36:03.686888  377744 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:36:03.686937  377744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:36:03.688770  377744 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1115 10:36:02.108071  378695 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:36:02.108099  378695 machine.go:97] duration metric: took 4.215497724s to provisionDockerMachine
	I1115 10:36:02.108110  378695 client.go:176] duration metric: took 10.030938427s to LocalClient.Create
	I1115 10:36:02.108130  378695 start.go:167] duration metric: took 10.030994703s to libmachine.API.Create "newest-cni-086099"
	I1115 10:36:02.108137  378695 start.go:293] postStartSetup for "newest-cni-086099" (driver="docker")
	I1115 10:36:02.108146  378695 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:36:02.108214  378695 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:36:02.108252  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:02.126898  378695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:02.234226  378695 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:36:02.237991  378695 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:36:02.238025  378695 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:36:02.238037  378695 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/addons for local assets ...
	I1115 10:36:02.238104  378695 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/files for local assets ...
	I1115 10:36:02.238204  378695 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem -> 589622.pem in /etc/ssl/certs
	I1115 10:36:02.238321  378695 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:36:02.249461  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:02.279024  378695 start.go:296] duration metric: took 170.869278ms for postStartSetup
	I1115 10:36:02.279408  378695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-086099
	I1115 10:36:02.299580  378695 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/config.json ...
	I1115 10:36:02.299869  378695 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:36:02.299927  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:02.318249  378695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:02.419697  378695 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:36:02.424780  378695 start.go:128] duration metric: took 10.349732709s to createHost
	I1115 10:36:02.424816  378695 start.go:83] releasing machines lock for "newest-cni-086099", held for 10.349888861s
	I1115 10:36:02.424894  378695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-086099
	I1115 10:36:02.442707  378695 ssh_runner.go:195] Run: cat /version.json
	I1115 10:36:02.442769  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:02.442774  378695 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:36:02.442838  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:02.475405  378695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:02.476482  378695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:02.627684  378695 ssh_runner.go:195] Run: systemctl --version
	I1115 10:36:02.635318  378695 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:36:02.690380  378695 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:36:02.695343  378695 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:36:02.695404  378695 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:36:02.723025  378695 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1115 10:36:02.723047  378695 start.go:496] detecting cgroup driver to use...
	I1115 10:36:02.723077  378695 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:36:02.723116  378695 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:36:02.740027  378695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:36:02.757082  378695 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:36:02.757147  378695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:36:02.780790  378695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:36:02.800005  378695 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:36:02.903918  378695 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:36:03.008676  378695 docker.go:234] disabling docker service ...
	I1115 10:36:03.008735  378695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:36:03.029417  378695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:36:03.042351  378695 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:36:03.141887  378695 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:36:03.242543  378695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:36:03.261558  378695 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:36:03.281222  378695 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:36:03.281289  378695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:03.292850  378695 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:36:03.292913  378695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:03.302308  378695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:03.312080  378695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:03.321520  378695 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:36:03.330371  378695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:03.339342  378695 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:03.358403  378695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:03.370875  378695 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:36:03.382720  378695 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:36:03.392373  378695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:03.490238  378695 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:36:03.612676  378695 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:36:03.612751  378695 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:36:03.616844  378695 start.go:564] Will wait 60s for crictl version
	I1115 10:36:03.616906  378695 ssh_runner.go:195] Run: which crictl
	I1115 10:36:03.620519  378695 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:36:03.647994  378695 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:36:03.648098  378695 ssh_runner.go:195] Run: crio --version
	I1115 10:36:03.681466  378695 ssh_runner.go:195] Run: crio --version
	I1115 10:36:03.715909  378695 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:36:03.717677  378695 cli_runner.go:164] Run: docker network inspect newest-cni-086099 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:36:03.737236  378695 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1115 10:36:03.741562  378695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:03.754243  378695 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1115 10:36:02.621370  367608 system_pods.go:86] 8 kube-system pods found
	I1115 10:36:02.621401  367608 system_pods.go:89] "coredns-66bc5c9577-5q2j4" [e6c4ca54-e0fe-45ee-88a6-33bdccbb876c] Running
	I1115 10:36:02.621407  367608 system_pods.go:89] "etcd-default-k8s-diff-port-026691" [f8228efa-91a3-41fb-9952-3b4063ddf162] Running
	I1115 10:36:02.621412  367608 system_pods.go:89] "kindnet-hjdrk" [9e1f7579-f5a2-44cd-b77f-71219cd8827d] Running
	I1115 10:36:02.621416  367608 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-026691" [9da86b3b-bfcd-4c40-ac86-ee9d9faeb9b0] Running
	I1115 10:36:02.621421  367608 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-026691" [23d9be87-1de1-4c38-b65e-b084cf0fed25] Running
	I1115 10:36:02.621424  367608 system_pods.go:89] "kube-proxy-c5bw5" [ee48d34b-ae60-4a03-a7bd-df76e089eebb] Running
	I1115 10:36:02.621428  367608 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-026691" [52b3711f-015c-4af6-8672-58642cbc0c16] Running
	I1115 10:36:02.621431  367608 system_pods.go:89] "storage-provisioner" [7dedf7a9-415d-4260-b225-7ca171744768] Running
	I1115 10:36:02.621439  367608 system_pods.go:126] duration metric: took 607.311685ms to wait for k8s-apps to be running ...
	I1115 10:36:02.621445  367608 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:36:02.621494  367608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:36:02.636245  367608 system_svc.go:56] duration metric: took 14.790396ms WaitForService to wait for kubelet
	I1115 10:36:02.636277  367608 kubeadm.go:587] duration metric: took 41.681692299s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:36:02.636317  367608 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:36:02.639743  367608 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:36:02.639770  367608 node_conditions.go:123] node cpu capacity is 8
	I1115 10:36:02.639786  367608 node_conditions.go:105] duration metric: took 3.46192ms to run NodePressure ...
	I1115 10:36:02.639802  367608 start.go:242] waiting for startup goroutines ...
	I1115 10:36:02.639815  367608 start.go:247] waiting for cluster config update ...
	I1115 10:36:02.639834  367608 start.go:256] writing updated cluster config ...
	I1115 10:36:02.640167  367608 ssh_runner.go:195] Run: rm -f paused
	I1115 10:36:02.644506  367608 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:36:02.649994  367608 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5q2j4" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:02.656679  367608 pod_ready.go:94] pod "coredns-66bc5c9577-5q2j4" is "Ready"
	I1115 10:36:02.656844  367608 pod_ready.go:86] duration metric: took 6.756741ms for pod "coredns-66bc5c9577-5q2j4" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:02.659798  367608 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:02.665415  367608 pod_ready.go:94] pod "etcd-default-k8s-diff-port-026691" is "Ready"
	I1115 10:36:02.665516  367608 pod_ready.go:86] duration metric: took 5.656754ms for pod "etcd-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:02.669115  367608 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:02.675621  367608 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-026691" is "Ready"
	I1115 10:36:02.675649  367608 pod_ready.go:86] duration metric: took 6.472611ms for pod "kube-apiserver-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:02.678236  367608 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:03.050408  367608 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-026691" is "Ready"
	I1115 10:36:03.050447  367608 pod_ready.go:86] duration metric: took 372.139168ms for pod "kube-controller-manager-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:03.250079  367608 pod_ready.go:83] waiting for pod "kube-proxy-c5bw5" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:03.649856  367608 pod_ready.go:94] pod "kube-proxy-c5bw5" is "Ready"
	I1115 10:36:03.649889  367608 pod_ready.go:86] duration metric: took 399.777083ms for pod "kube-proxy-c5bw5" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:03.850318  367608 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:04.249888  367608 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-026691" is "Ready"
	I1115 10:36:04.249914  367608 pod_ready.go:86] duration metric: took 399.564892ms for pod "kube-scheduler-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:04.249926  367608 pod_ready.go:40] duration metric: took 1.605379763s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:36:04.304218  367608 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:36:04.306183  367608 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-026691" cluster and "default" namespace by default
	I1115 10:36:03.689851  377744 addons.go:515] duration metric: took 4.61797682s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1115 10:36:03.700992  377744 api_server.go:72] duration metric: took 4.62919911s to wait for apiserver process to appear ...
	I1115 10:36:03.701014  377744 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:36:03.701034  377744 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1115 10:36:03.705295  377744 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1115 10:36:03.706367  377744 api_server.go:141] control plane version: v1.34.1
	I1115 10:36:03.706398  377744 api_server.go:131] duration metric: took 5.374158ms to wait for apiserver health ...
	I1115 10:36:03.706409  377744 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:36:03.710047  377744 system_pods.go:59] 8 kube-system pods found
	I1115 10:36:03.710083  377744 system_pods.go:61] "coredns-66bc5c9577-fjzk5" [d4d185bc-88ec-4edb-b250-6a59ee426bf5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:03.710095  377744 system_pods.go:61] "etcd-embed-certs-719574" [293cb467-4f15-417b-944e-457c3ac7e56f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:36:03.710106  377744 system_pods.go:61] "kindnet-ql2r4" [224e2951-6c97-449d-8ff8-f72aa6d36d60] Running
	I1115 10:36:03.710122  377744 system_pods.go:61] "kube-apiserver-embed-certs-719574" [43a83385-040c-470f-8242-5066617ac8d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:36:03.710135  377744 system_pods.go:61] "kube-controller-manager-embed-certs-719574" [78f16cd1-774e-4556-9db6-bca54bc8214b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:36:03.710141  377744 system_pods.go:61] "kube-proxy-kmc8c" [3534c76c-4b99-4b84-ba00-21d0d49e770f] Running
	I1115 10:36:03.710147  377744 system_pods.go:61] "kube-scheduler-embed-certs-719574" [9b8d2d78-442d-48cc-88be-29b9eb7017e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:36:03.710158  377744 system_pods.go:61] "storage-provisioner" [39c3baf2-24de-475e-aeef-a10825991ca3] Running
	I1115 10:36:03.710165  377744 system_pods.go:74] duration metric: took 3.749108ms to wait for pod list to return data ...
	I1115 10:36:03.710174  377744 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:36:03.712493  377744 default_sa.go:45] found service account: "default"
	I1115 10:36:03.712513  377744 default_sa.go:55] duration metric: took 2.331314ms for default service account to be created ...
	I1115 10:36:03.712522  377744 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:36:03.715355  377744 system_pods.go:86] 8 kube-system pods found
	I1115 10:36:03.715378  377744 system_pods.go:89] "coredns-66bc5c9577-fjzk5" [d4d185bc-88ec-4edb-b250-6a59ee426bf5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:03.715386  377744 system_pods.go:89] "etcd-embed-certs-719574" [293cb467-4f15-417b-944e-457c3ac7e56f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:36:03.715391  377744 system_pods.go:89] "kindnet-ql2r4" [224e2951-6c97-449d-8ff8-f72aa6d36d60] Running
	I1115 10:36:03.715398  377744 system_pods.go:89] "kube-apiserver-embed-certs-719574" [43a83385-040c-470f-8242-5066617ac8d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:36:03.715405  377744 system_pods.go:89] "kube-controller-manager-embed-certs-719574" [78f16cd1-774e-4556-9db6-bca54bc8214b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:36:03.715412  377744 system_pods.go:89] "kube-proxy-kmc8c" [3534c76c-4b99-4b84-ba00-21d0d49e770f] Running
	I1115 10:36:03.715417  377744 system_pods.go:89] "kube-scheduler-embed-certs-719574" [9b8d2d78-442d-48cc-88be-29b9eb7017e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:36:03.715427  377744 system_pods.go:89] "storage-provisioner" [39c3baf2-24de-475e-aeef-a10825991ca3] Running
	I1115 10:36:03.715435  377744 system_pods.go:126] duration metric: took 2.908753ms to wait for k8s-apps to be running ...
	I1115 10:36:03.715443  377744 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:36:03.715482  377744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:36:03.729079  377744 system_svc.go:56] duration metric: took 13.624714ms WaitForService to wait for kubelet
	I1115 10:36:03.729108  377744 kubeadm.go:587] duration metric: took 4.657317817s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:36:03.729130  377744 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:36:03.732380  377744 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:36:03.732409  377744 node_conditions.go:123] node cpu capacity is 8
	I1115 10:36:03.732424  377744 node_conditions.go:105] duration metric: took 3.288836ms to run NodePressure ...
	I1115 10:36:03.732439  377744 start.go:242] waiting for startup goroutines ...
	I1115 10:36:03.732448  377744 start.go:247] waiting for cluster config update ...
	I1115 10:36:03.732463  377744 start.go:256] writing updated cluster config ...
	I1115 10:36:03.732754  377744 ssh_runner.go:195] Run: rm -f paused
	I1115 10:36:03.737164  377744 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:36:03.740586  377744 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fjzk5" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:03.755299  378695 kubeadm.go:884] updating cluster {Name:newest-cni-086099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:36:03.755432  378695 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:36:03.755482  378695 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:03.794722  378695 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:03.794749  378695 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:36:03.794805  378695 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:03.826109  378695 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:03.826142  378695 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:36:03.826153  378695 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1115 10:36:03.826264  378695 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-086099 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:36:03.826354  378695 ssh_runner.go:195] Run: crio config
	I1115 10:36:03.879671  378695 cni.go:84] Creating CNI manager for ""
	I1115 10:36:03.879701  378695 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:36:03.879717  378695 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1115 10:36:03.879739  378695 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-086099 NodeName:newest-cni-086099 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:36:03.879883  378695 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-086099"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:36:03.879988  378695 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:36:03.888992  378695 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:36:03.889052  378695 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:36:03.897294  378695 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1115 10:36:03.911151  378695 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:36:03.930297  378695 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1115 10:36:03.945072  378695 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:36:03.948706  378695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:03.959243  378695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:04.058938  378695 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:36:04.093857  378695 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099 for IP: 192.168.103.2
	I1115 10:36:04.093888  378695 certs.go:195] generating shared ca certs ...
	I1115 10:36:04.093909  378695 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:04.094076  378695 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 10:36:04.094148  378695 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 10:36:04.094163  378695 certs.go:257] generating profile certs ...
	I1115 10:36:04.094230  378695 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/client.key
	I1115 10:36:04.094258  378695 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/client.crt with IP's: []
	I1115 10:36:04.385453  378695 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/client.crt ...
	I1115 10:36:04.385478  378695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/client.crt: {Name:mk40f6a053043aca087e720d3a4da44f4215e456 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:04.385623  378695 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/client.key ...
	I1115 10:36:04.385633  378695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/client.key: {Name:mk7ba7a9aed87498b12d0ea82f1fd16a2802adbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:04.385729  378695 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key.d719cdad
	I1115 10:36:04.385749  378695 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.crt.d719cdad with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1115 10:36:04.782829  378695 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.crt.d719cdad ...
	I1115 10:36:04.782863  378695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.crt.d719cdad: {Name:mkcdec4fb6d5949c6190ac10a0f9caeb369ef1df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:04.783103  378695 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key.d719cdad ...
	I1115 10:36:04.783129  378695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key.d719cdad: {Name:mk74203e2c301a3a488fc95324a401039fa8106d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:04.783253  378695 certs.go:382] copying /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.crt.d719cdad -> /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.crt
	I1115 10:36:04.783373  378695 certs.go:386] copying /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key.d719cdad -> /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key
	I1115 10:36:04.783463  378695 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.key
	I1115 10:36:04.783486  378695 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.crt with IP's: []
	I1115 10:36:04.900301  378695 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.crt ...
	I1115 10:36:04.900329  378695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.crt: {Name:mk0d5b4842614d84db6a4d32b9e40b0ee2961026 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:04.900527  378695 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.key ...
	I1115 10:36:04.900547  378695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.key: {Name:mkc0cf01fd3204cf2eb33c45d49bdb1a3af7d389 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:04.900769  378695 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:36:04.900806  378695 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:36:04.900817  378695 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:36:04.900837  378695 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:36:04.900863  378695 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:36:04.900884  378695 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:36:04.900931  378695 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:04.901498  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:36:04.920490  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:36:04.938524  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:36:04.956167  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:36:04.974935  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 10:36:04.995270  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:36:05.016110  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:36:05.034440  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:36:05.051948  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:36:05.071136  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:36:05.100067  378695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:36:05.120144  378695 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:36:05.133751  378695 ssh_runner.go:195] Run: openssl version
	I1115 10:36:05.140442  378695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:36:05.150520  378695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:05.155339  378695 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:05.155411  378695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:05.205520  378695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:36:05.214306  378695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:36:05.222589  378695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:36:05.226661  378695 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:36:05.226723  378695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:36:05.269094  378695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:36:05.282750  378695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:36:05.291785  378695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:36:05.295742  378695 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:36:05.295801  378695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:36:05.341059  378695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
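	The ln -fs steps above follow the standard OpenSSL CA-directory layout: each certificate under /usr/share/ca-certificates is linked into /etc/ssl/certs as <subject-hash>.0, where the hash is what `openssl x509 -hash -noout` prints (b5213941, 51391683 and 3ec20f2e here). A sketch of checking one of those links by hand, assuming SSH access to this profile's node:
	
	  minikube -p newest-cni-086099 ssh -- 'openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem; ls -l /etc/ssl/certs/b5213941.0'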
	I1115 10:36:05.352931  378695 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:36:05.357729  378695 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 10:36:05.357794  378695 kubeadm.go:401] StartCluster: {Name:newest-cni-086099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:05.357898  378695 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:36:05.358038  378695 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:36:05.389342  378695 cri.go:89] found id: ""
	I1115 10:36:05.389409  378695 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:36:05.399176  378695 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 10:36:05.407568  378695 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 10:36:05.407619  378695 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 10:36:05.415732  378695 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 10:36:05.415750  378695 kubeadm.go:158] found existing configuration files:
	
	I1115 10:36:05.415789  378695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 10:36:05.423933  378695 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 10:36:05.424003  378695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 10:36:05.431425  378695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 10:36:05.439333  378695 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 10:36:05.439396  378695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 10:36:05.446777  378695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 10:36:05.454437  378695 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 10:36:05.454481  378695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 10:36:05.461644  378695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 10:36:05.468875  378695 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 10:36:05.468937  378695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 10:36:05.476821  378695 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 10:36:05.516431  378695 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 10:36:05.516536  378695 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 10:36:05.536153  378695 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 10:36:05.536251  378695 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1115 10:36:05.536322  378695 kubeadm.go:319] OS: Linux
	I1115 10:36:05.536373  378695 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 10:36:05.536430  378695 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 10:36:05.536519  378695 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 10:36:05.536598  378695 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 10:36:05.536682  378695 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 10:36:05.536769  378695 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 10:36:05.536832  378695 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 10:36:05.536877  378695 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 10:36:05.536920  378695 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 10:36:05.598690  378695 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 10:36:05.598871  378695 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 10:36:05.599041  378695 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 10:36:05.606076  378695 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 10:36:05.608588  378695 out.go:252]   - Generating certificates and keys ...
	I1115 10:36:05.608685  378695 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 10:36:05.608773  378695 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 10:36:06.648403  378695 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 10:36:06.817549  378695 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	W1115 10:36:05.746906  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	W1115 10:36:07.750717  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	I1115 10:36:07.421389  378695 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 10:36:07.530169  378695 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 10:36:07.661595  378695 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 10:36:07.661935  378695 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-086099] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1115 10:36:07.815844  378695 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 10:36:07.815984  378695 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-086099] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1115 10:36:08.340480  378695 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 10:36:08.581150  378695 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 10:36:08.685187  378695 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 10:36:08.685316  378695 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 10:36:09.142759  378695 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 10:36:09.525800  378695 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 10:36:10.064453  378695 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 10:36:10.611944  378695 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 10:36:10.725282  378695 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 10:36:10.726089  378695 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 10:36:10.732368  378695 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 10:36:10.733696  378695 out.go:252]   - Booting up control plane ...
	I1115 10:36:10.733914  378695 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 10:36:10.734036  378695 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 10:36:10.734647  378695 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 10:36:10.751182  378695 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 10:36:10.751353  378695 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 10:36:10.758855  378695 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 10:36:10.759149  378695 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 10:36:10.759248  378695 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 10:36:10.861925  378695 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 10:36:10.862096  378695 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 10:36:11.863287  378695 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00142158s
	I1115 10:36:11.866873  378695 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 10:36:11.867055  378695 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1115 10:36:11.867227  378695 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 10:36:11.867334  378695 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1115 10:36:10.247511  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	W1115 10:36:12.252752  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	W1115 10:36:14.260890  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	I1115 10:36:15.556581  378695 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.689495085s
	I1115 10:36:15.951246  378695 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.084319454s
	I1115 10:36:17.869572  378695 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001723654s
	I1115 10:36:17.885461  378695 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 10:36:17.896566  378695 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 10:36:17.907261  378695 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 10:36:17.907531  378695 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-086099 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 10:36:17.928555  378695 kubeadm.go:319] [bootstrap-token] Using token: mb3kq6.gnhvb4w2eo34g6rt
	I1115 10:36:17.929717  378695 out.go:252]   - Configuring RBAC rules ...
	I1115 10:36:17.929866  378695 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 10:36:17.935138  378695 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 10:36:17.941433  378695 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 10:36:17.944323  378695 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 10:36:17.950571  378695 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 10:36:17.953567  378695 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 10:36:18.276683  378695 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 10:36:18.694729  378695 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 10:36:19.323365  378695 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 10:36:19.324421  378695 kubeadm.go:319] 
	I1115 10:36:19.324521  378695 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 10:36:19.324553  378695 kubeadm.go:319] 
	I1115 10:36:19.324691  378695 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 10:36:19.324704  378695 kubeadm.go:319] 
	I1115 10:36:19.324736  378695 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 10:36:19.324824  378695 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 10:36:19.324903  378695 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 10:36:19.324915  378695 kubeadm.go:319] 
	I1115 10:36:19.325027  378695 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 10:36:19.325044  378695 kubeadm.go:319] 
	I1115 10:36:19.325109  378695 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 10:36:19.325119  378695 kubeadm.go:319] 
	I1115 10:36:19.325199  378695 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 10:36:19.325318  378695 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 10:36:19.325427  378695 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 10:36:19.325437  378695 kubeadm.go:319] 
	I1115 10:36:19.325540  378695 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 10:36:19.325661  378695 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 10:36:19.325675  378695 kubeadm.go:319] 
	I1115 10:36:19.325800  378695 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token mb3kq6.gnhvb4w2eo34g6rt \
	I1115 10:36:19.325986  378695 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c958101d3a67868123c5e439a6c1ea6e30a99a6538b76cbe461e2c919cf1aec \
	I1115 10:36:19.326028  378695 kubeadm.go:319] 	--control-plane 
	I1115 10:36:19.326040  378695 kubeadm.go:319] 
	I1115 10:36:19.326135  378695 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 10:36:19.326146  378695 kubeadm.go:319] 
	I1115 10:36:19.326285  378695 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token mb3kq6.gnhvb4w2eo34g6rt \
	I1115 10:36:19.326413  378695 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c958101d3a67868123c5e439a6c1ea6e30a99a6538b76cbe461e2c919cf1aec 
	I1115 10:36:19.329000  378695 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1115 10:36:19.329189  378695 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1115 10:36:19.329285  378695 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
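	The two kubeadm join commands above pin the cluster CA through --discovery-token-ca-cert-hash. That pin is the SHA-256 of the CA's public key and can be recomputed from the certificate directory this profile uses (/var/lib/minikube/certs); this is the standard kubeadm recipe, shown here only as a sketch rather than output from this run:
	
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'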
	I1115 10:36:19.329320  378695 cni.go:84] Creating CNI manager for ""
	I1115 10:36:19.329338  378695 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:36:19.373881  378695 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1115 10:36:16.745628  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	W1115 10:36:18.747564  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	I1115 10:36:19.401604  378695 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 10:36:19.405969  378695 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 10:36:19.405991  378695 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 10:36:19.419562  378695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
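	Once the cni.yaml apply above returns, kindnet should come up as a DaemonSet with one pod on this single node; a quick follow-up check (not part of this run, and assuming the upstream manifest's kindnet name and app label) would be:
	
	  kubectl --context newest-cni-086099 -n kube-system get daemonset kindnet
	  kubectl --context newest-cni-086099 -n kube-system get pods -l app=kindnet -o wide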
	I1115 10:36:19.871714  378695 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:36:19.871805  378695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:36:19.871833  378695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-086099 minikube.k8s.io/updated_at=2025_11_15T10_36_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510 minikube.k8s.io/name=newest-cni-086099 minikube.k8s.io/primary=true
	I1115 10:36:19.884825  378695 ops.go:34] apiserver oom_adj: -16
	I1115 10:36:19.969378  378695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:36:20.470165  378695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:36:20.969600  378695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:36:21.470378  378695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:36:21.969675  378695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:36:22.470338  378695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:36:22.969563  378695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:36:23.469649  378695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:36:23.969861  378695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:36:24.034912  378695 kubeadm.go:1114] duration metric: took 4.163172521s to wait for elevateKubeSystemPrivileges
	I1115 10:36:24.034947  378695 kubeadm.go:403] duration metric: took 18.677158066s to StartCluster
	I1115 10:36:24.034983  378695 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:24.035062  378695 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:36:24.036655  378695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:24.036946  378695 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 10:36:24.036974  378695 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:36:24.037059  378695 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:36:24.037170  378695 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-086099"
	I1115 10:36:24.037190  378695 addons.go:70] Setting default-storageclass=true in profile "newest-cni-086099"
	I1115 10:36:24.037204  378695 config.go:182] Loaded profile config "newest-cni-086099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:24.037212  378695 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-086099"
	I1115 10:36:24.037228  378695 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-086099"
	I1115 10:36:24.037244  378695 host.go:66] Checking if "newest-cni-086099" exists ...
	I1115 10:36:24.037607  378695 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:24.037748  378695 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:24.038613  378695 out.go:179] * Verifying Kubernetes components...
	I1115 10:36:24.041602  378695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:24.065552  378695 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1115 10:36:21.246632  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	W1115 10:36:23.746379  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	I1115 10:36:24.066134  378695 addons.go:239] Setting addon default-storageclass=true in "newest-cni-086099"
	I1115 10:36:24.066180  378695 host.go:66] Checking if "newest-cni-086099" exists ...
	I1115 10:36:24.066682  378695 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:24.068197  378695 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:36:24.068221  378695 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:36:24.068278  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:24.091131  378695 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:36:24.091161  378695 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:36:24.091235  378695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:24.091223  378695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:24.109995  378695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:24.248336  378695 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 10:36:24.327705  378695 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:36:24.342326  378695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:36:24.342664  378695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:36:24.746481  378695 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1115 10:36:24.747821  378695 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:36:24.747896  378695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:36:25.058451  378695 api_server.go:72] duration metric: took 1.021431031s to wait for apiserver process to appear ...
	I1115 10:36:25.058524  378695 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:36:25.058555  378695 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1115 10:36:25.059948  378695 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1115 10:36:25.061128  378695 addons.go:515] duration metric: took 1.024065508s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1115 10:36:25.063739  378695 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1115 10:36:25.064587  378695 api_server.go:141] control plane version: v1.34.1
	I1115 10:36:25.064611  378695 api_server.go:131] duration metric: took 6.076268ms to wait for apiserver health ...
	I1115 10:36:25.064620  378695 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:36:25.067456  378695 system_pods.go:59] 8 kube-system pods found
	I1115 10:36:25.067483  378695 system_pods.go:61] "coredns-66bc5c9577-rblh2" [903029e0-3b15-43f3-836a-884de528cbc9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 10:36:25.067488  378695 system_pods.go:61] "etcd-newest-cni-086099" [6768a007-08a6-47b0-9917-cf54f577829b] Running
	I1115 10:36:25.067495  378695 system_pods.go:61] "kindnet-2h7mm" [1b25f4e6-5f26-42ce-8ceb-56003682c785] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1115 10:36:25.067501  378695 system_pods.go:61] "kube-apiserver-newest-cni-086099" [3ca22829-f679-44bf-94e5-e4a368e13dcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:36:25.067504  378695 system_pods.go:61] "kube-controller-manager-newest-cni-086099" [1f45f32a-2d9e-49c0-9c69-d2aa59324564] Running
	I1115 10:36:25.067510  378695 system_pods.go:61] "kube-proxy-6jpzt" [7409c19f-472b-4074-81d0-8e43ac2bc9d4] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 10:36:25.067518  378695 system_pods.go:61] "kube-scheduler-newest-cni-086099" [c3510e0f-9b51-4fb5-bc6e-d0e47be8f5ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:36:25.067522  378695 system_pods.go:61] "storage-provisioner" [23166a3f-bb02-48ca-ab00-721c8c46525d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 10:36:25.067528  378695 system_pods.go:74] duration metric: took 2.902785ms to wait for pod list to return data ...
	I1115 10:36:25.067538  378695 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:36:25.069657  378695 default_sa.go:45] found service account: "default"
	I1115 10:36:25.069673  378695 default_sa.go:55] duration metric: took 2.130261ms for default service account to be created ...
	I1115 10:36:25.069685  378695 kubeadm.go:587] duration metric: took 1.032677376s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1115 10:36:25.069699  378695 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:36:25.071764  378695 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:36:25.071788  378695 node_conditions.go:123] node cpu capacity is 8
	I1115 10:36:25.071803  378695 node_conditions.go:105] duration metric: took 2.099024ms to run NodePressure ...
	I1115 10:36:25.071818  378695 start.go:242] waiting for startup goroutines ...
	I1115 10:36:25.251435  378695 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-086099" context rescaled to 1 replicas
	I1115 10:36:25.251487  378695 start.go:247] waiting for cluster config update ...
	I1115 10:36:25.251504  378695 start.go:256] writing updated cluster config ...
	I1115 10:36:25.251841  378695 ssh_runner.go:195] Run: rm -f paused
	I1115 10:36:25.301550  378695 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:36:25.303359  378695 out.go:179] * Done! kubectl is now configured to use "newest-cni-086099" cluster and "default" namespace by default
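	With the profile reported as done, a typical sanity check (not captured in this log) is to confirm the node and the kube-system pods from the host:
	
	  kubectl --context newest-cni-086099 get nodes -o wide
	  kubectl --context newest-cni-086099 get pods -A
	
	The node is expected to stay NotReady briefly until kindnet writes its CNI config into /etc/cni/net.d, which is exactly the not-ready taint and KubeletNotReady condition visible in the "describe nodes" section below.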
	
	
	==> CRI-O <==
	Nov 15 10:36:24 newest-cni-086099 crio[894]: time="2025-11-15T10:36:24.530677725Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:24 newest-cni-086099 crio[894]: time="2025-11-15T10:36:24.532427251Z" level=info msg="Running pod sandbox: kube-system/kindnet-2h7mm/POD" id=4a58470f-012f-47a9-b2de-ce83b635172b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:36:24 newest-cni-086099 crio[894]: time="2025-11-15T10:36:24.532497569Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:24 newest-cni-086099 crio[894]: time="2025-11-15T10:36:24.534320478Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=1ec9a1ff-5f4c-4928-ad82-86f461ccf251 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:36:24 newest-cni-086099 crio[894]: time="2025-11-15T10:36:24.538164522Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=4a58470f-012f-47a9-b2de-ce83b635172b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:36:24 newest-cni-086099 crio[894]: time="2025-11-15T10:36:24.540034102Z" level=info msg="Ran pod sandbox 4e8bf51611a75c1e12a8f147232a7b14baa4815a472bf4d2916e879b9c5f0ff8 with infra container: kube-system/kube-proxy-6jpzt/POD" id=1ec9a1ff-5f4c-4928-ad82-86f461ccf251 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:36:24 newest-cni-086099 crio[894]: time="2025-11-15T10:36:24.542139414Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=4900a884-e044-446b-8258-c0bd92073e6f name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:36:24 newest-cni-086099 crio[894]: time="2025-11-15T10:36:24.54271666Z" level=info msg="Ran pod sandbox 501d631c39093d8bdbe0ac12f14195c7454b04430daef50c91a0e51ea9fbd92e with infra container: kube-system/kindnet-2h7mm/POD" id=4a58470f-012f-47a9-b2de-ce83b635172b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:36:24 newest-cni-086099 crio[894]: time="2025-11-15T10:36:24.544609583Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=a95299f7-b151-4f50-8610-0075eeb77349 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:36:24 newest-cni-086099 crio[894]: time="2025-11-15T10:36:24.544633271Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=99e2ad56-5277-42c4-afe0-8f8cc3e8ff2d name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:36:24 newest-cni-086099 crio[894]: time="2025-11-15T10:36:24.545686697Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=26f6c542-ae11-4fef-83a2-762a33fa07a5 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:36:24 newest-cni-086099 crio[894]: time="2025-11-15T10:36:24.551080665Z" level=info msg="Creating container: kube-system/kube-proxy-6jpzt/kube-proxy" id=de9f8f18-126d-4bfc-aaf7-0ce00b50b351 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:36:24 newest-cni-086099 crio[894]: time="2025-11-15T10:36:24.551744511Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:24 newest-cni-086099 crio[894]: time="2025-11-15T10:36:24.555295019Z" level=info msg="Creating container: kube-system/kindnet-2h7mm/kindnet-cni" id=c0e72b53-d7cc-4826-8661-de4bae3bab52 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:36:24 newest-cni-086099 crio[894]: time="2025-11-15T10:36:24.556003778Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:24 newest-cni-086099 crio[894]: time="2025-11-15T10:36:24.558613953Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:24 newest-cni-086099 crio[894]: time="2025-11-15T10:36:24.62741824Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:24 newest-cni-086099 crio[894]: time="2025-11-15T10:36:24.630060948Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:24 newest-cni-086099 crio[894]: time="2025-11-15T10:36:24.630645632Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:24 newest-cni-086099 crio[894]: time="2025-11-15T10:36:24.655834452Z" level=info msg="Created container b598333c22411ef046967f2c2bc0e28cd2ba0f659400c9b5dff6f20e159f7b74: kube-system/kindnet-2h7mm/kindnet-cni" id=c0e72b53-d7cc-4826-8661-de4bae3bab52 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:36:24 newest-cni-086099 crio[894]: time="2025-11-15T10:36:24.727772003Z" level=info msg="Starting container: b598333c22411ef046967f2c2bc0e28cd2ba0f659400c9b5dff6f20e159f7b74" id=2d75ef78-e1f0-4bee-8ad4-ecf390b221de name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:36:24 newest-cni-086099 crio[894]: time="2025-11-15T10:36:24.731027722Z" level=info msg="Created container 7a9a8c7d08d9929774aba6fcc959a3367a10380fff7a54a32d207174c420ee96: kube-system/kube-proxy-6jpzt/kube-proxy" id=de9f8f18-126d-4bfc-aaf7-0ce00b50b351 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:36:24 newest-cni-086099 crio[894]: time="2025-11-15T10:36:24.731580545Z" level=info msg="Started container" PID=1732 containerID=b598333c22411ef046967f2c2bc0e28cd2ba0f659400c9b5dff6f20e159f7b74 description=kube-system/kindnet-2h7mm/kindnet-cni id=2d75ef78-e1f0-4bee-8ad4-ecf390b221de name=/runtime.v1.RuntimeService/StartContainer sandboxID=501d631c39093d8bdbe0ac12f14195c7454b04430daef50c91a0e51ea9fbd92e
	Nov 15 10:36:24 newest-cni-086099 crio[894]: time="2025-11-15T10:36:24.731938126Z" level=info msg="Starting container: 7a9a8c7d08d9929774aba6fcc959a3367a10380fff7a54a32d207174c420ee96" id=afa74437-9d48-4a9d-a9fd-24c7bd63ec40 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:36:24 newest-cni-086099 crio[894]: time="2025-11-15T10:36:24.735613296Z" level=info msg="Started container" PID=1731 containerID=7a9a8c7d08d9929774aba6fcc959a3367a10380fff7a54a32d207174c420ee96 description=kube-system/kube-proxy-6jpzt/kube-proxy id=afa74437-9d48-4a9d-a9fd-24c7bd63ec40 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4e8bf51611a75c1e12a8f147232a7b14baa4815a472bf4d2916e879b9c5f0ff8
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b598333c22411       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   501d631c39093       kindnet-2h7mm                               kube-system
	7a9a8c7d08d99       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   1 second ago        Running             kube-proxy                0                   4e8bf51611a75       kube-proxy-6jpzt                            kube-system
	cbc6a5a0496e0       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   14 seconds ago      Running             etcd                      0                   f30bfd8f42929       etcd-newest-cni-086099                      kube-system
	9b9f82e052bfb       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   14 seconds ago      Running             kube-scheduler            0                   d20e07c867db6       kube-scheduler-newest-cni-086099            kube-system
	bd4fc9a731432       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   14 seconds ago      Running             kube-controller-manager   0                   c276ce7c4ef33       kube-controller-manager-newest-cni-086099   kube-system
	45e57b26787ba       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   14 seconds ago      Running             kube-apiserver            0                   9c34b428add8b       kube-apiserver-newest-cni-086099            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-086099
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-086099
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=newest-cni-086099
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_36_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:36:15 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-086099
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:36:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:36:18 +0000   Sat, 15 Nov 2025 10:36:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:36:18 +0000   Sat, 15 Nov 2025 10:36:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:36:18 +0000   Sat, 15 Nov 2025 10:36:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 15 Nov 2025 10:36:18 +0000   Sat, 15 Nov 2025 10:36:12 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-086099
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                43538429-02c4-40c8-b533-c24bc0895325
	  Boot ID:                    6a33f504-3a8c-4d49-8fc5-301cf5275efc
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-086099                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-2h7mm                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-086099             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-086099    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-6jpzt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-086099             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 1s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  15s (x8 over 15s)  kubelet          Node newest-cni-086099 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15s (x8 over 15s)  kubelet          Node newest-cni-086099 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15s (x8 over 15s)  kubelet          Node newest-cni-086099 status is now: NodeHasSufficientPID
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 8s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8s                 kubelet          Node newest-cni-086099 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s                 kubelet          Node newest-cni-086099 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s                 kubelet          Node newest-cni-086099 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-086099 event: Registered Node newest-cni-086099 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 7f 53 f9 15 93 08 06
	[  +0.017821] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c6 66 25 e9 45 28 08 06
	[ +19.178294] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 a3 65 b9 28 15 08 06
	[  +0.347929] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 6d df 33 a5 ad 08 06
	[  +0.000319] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 7f 53 f9 15 93 08 06
	[Nov15 10:33] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 96 ed 10 e8 9a 80 08 06
	[  +0.000481] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 a3 65 b9 28 15 08 06
	[ +35.214217] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 c2 9a 56 52 0c 08 06
	[  +9.104720] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 c2 c4 d1 7c dd 08 06
	[Nov15 10:34] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 68 1e 80 53 05 08 06
	[  +0.000444] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 c2 c4 d1 7c dd 08 06
	[ +18.836046] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 98 db 78 4f 0b 08 06
	[  +0.000708] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 c2 9a 56 52 0c 08 06
	
	
	==> etcd [cbc6a5a0496e0f0991bedb530b999675e93dcd689a98da5e9b87e97084ceb1cd] <==
	{"level":"warn","ts":"2025-11-15T10:36:14.331577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:14.341931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:14.351072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:14.360888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:14.369651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:14.381371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:14.435738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:14.443292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:14.451794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:14.460536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:14.469703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:14.478627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:14.534265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:14.542271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:14.550852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:14.560093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:14.569361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:14.627349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:14.635062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:14.642583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:14.650934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:14.668736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:14.677779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:14.727398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:14.847495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38850","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:36:26 up  2:18,  0 user,  load average: 3.71, 4.29, 2.79
	Linux newest-cni-086099 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b598333c22411ef046967f2c2bc0e28cd2ba0f659400c9b5dff6f20e159f7b74] <==
	I1115 10:36:24.859562       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:36:24.859829       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1115 10:36:24.860019       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:36:24.860038       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:36:24.860064       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:36:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:36:25.153087       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:36:25.153220       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:36:25.153236       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:36:25.154031       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [45e57b26787ba5110d300e840eb283b0c09d461c501f8119ea4c70f55f9e3b61] <==
	I1115 10:36:15.941615       1 policy_source.go:240] refreshing policies
	E1115 10:36:15.983419       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1115 10:36:16.031399       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 10:36:16.035452       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:36:16.035454       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1115 10:36:16.041771       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:36:16.043064       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 10:36:16.135630       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:36:16.778830       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1115 10:36:16.784006       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1115 10:36:16.784025       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:36:17.318329       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:36:17.358303       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:36:17.439194       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1115 10:36:17.445471       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1115 10:36:17.446904       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:36:17.451127       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:36:17.840625       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:36:18.684412       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:36:18.693755       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1115 10:36:18.701751       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 10:36:23.543764       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:36:23.590781       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1115 10:36:23.694489       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:36:23.698070       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [bd4fc9a731432d09eb4bad8af4b6e393ad958c1364655aaf8b2dbcca25cbe6ef] <==
	I1115 10:36:22.840232       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 10:36:22.840262       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 10:36:22.840272       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 10:36:22.840290       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 10:36:22.840415       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 10:36:22.840510       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 10:36:22.841518       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1115 10:36:22.841559       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1115 10:36:22.842750       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 10:36:22.843468       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1115 10:36:22.845862       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:36:22.845941       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 10:36:22.846033       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:36:22.848535       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1115 10:36:22.848766       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1115 10:36:22.848852       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1115 10:36:22.848858       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1115 10:36:22.848865       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1115 10:36:22.848692       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:36:22.852117       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 10:36:22.855694       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:36:22.855716       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:36:22.855725       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 10:36:22.929324       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:36:22.940665       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-086099" podCIDRs=["10.42.0.0/24"]
	
	
	==> kube-proxy [7a9a8c7d08d9929774aba6fcc959a3367a10380fff7a54a32d207174c420ee96] <==
	I1115 10:36:24.849488       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:36:24.983327       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:36:25.083994       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:36:25.084052       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1115 10:36:25.084151       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:36:25.103425       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:36:25.103495       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:36:25.109074       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:36:25.110057       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:36:25.110112       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:36:25.112465       1 config.go:200] "Starting service config controller"
	I1115 10:36:25.112484       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:36:25.112523       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:36:25.112531       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:36:25.112552       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:36:25.112613       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:36:25.112603       1 config.go:309] "Starting node config controller"
	I1115 10:36:25.112707       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:36:25.112733       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:36:25.212647       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 10:36:25.212674       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:36:25.212695       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [9b9f82e052bfb29345a6d22442a30779bdd3721f56a382298f761a6e6617b076] <==
	E1115 10:36:15.948213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 10:36:15.948270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 10:36:15.948276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 10:36:15.948753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 10:36:15.949030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 10:36:15.949117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 10:36:15.949138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 10:36:15.949142       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 10:36:15.949146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 10:36:15.949262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 10:36:15.949260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 10:36:15.949292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 10:36:15.949295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 10:36:15.949331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 10:36:15.949392       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 10:36:16.778567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 10:36:16.778567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 10:36:16.802101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 10:36:16.891752       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 10:36:16.932057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 10:36:16.975888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 10:36:17.002138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 10:36:17.086437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 10:36:17.172370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1115 10:36:20.046355       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:36:19 newest-cni-086099 kubelet[1448]: E1115 10:36:19.696470    1448 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-086099\" already exists" pod="kube-system/etcd-newest-cni-086099"
	Nov 15 10:36:19 newest-cni-086099 kubelet[1448]: I1115 10:36:19.801366    1448 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-086099" podStartSLOduration=1.8013364090000001 podStartE2EDuration="1.801336409s" podCreationTimestamp="2025-11-15 10:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:36:19.696739086 +0000 UTC m=+1.267593405" watchObservedRunningTime="2025-11-15 10:36:19.801336409 +0000 UTC m=+1.372190722"
	Nov 15 10:36:19 newest-cni-086099 kubelet[1448]: I1115 10:36:19.801538    1448 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-086099" podStartSLOduration=1.801528529 podStartE2EDuration="1.801528529s" podCreationTimestamp="2025-11-15 10:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:36:19.801517394 +0000 UTC m=+1.372371696" watchObservedRunningTime="2025-11-15 10:36:19.801528529 +0000 UTC m=+1.372382844"
	Nov 15 10:36:19 newest-cni-086099 kubelet[1448]: I1115 10:36:19.863267    1448 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-086099" podStartSLOduration=1.863243963 podStartE2EDuration="1.863243963s" podCreationTimestamp="2025-11-15 10:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:36:19.863182255 +0000 UTC m=+1.434036572" watchObservedRunningTime="2025-11-15 10:36:19.863243963 +0000 UTC m=+1.434098279"
	Nov 15 10:36:19 newest-cni-086099 kubelet[1448]: I1115 10:36:19.871861    1448 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-086099" podStartSLOduration=1.8718349029999999 podStartE2EDuration="1.871834903s" podCreationTimestamp="2025-11-15 10:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:36:19.871588667 +0000 UTC m=+1.442442983" watchObservedRunningTime="2025-11-15 10:36:19.871834903 +0000 UTC m=+1.442689219"
	Nov 15 10:36:23 newest-cni-086099 kubelet[1448]: I1115 10:36:23.035660    1448 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 15 10:36:23 newest-cni-086099 kubelet[1448]: I1115 10:36:23.036509    1448 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 15 10:36:23 newest-cni-086099 kubelet[1448]: I1115 10:36:23.735237    1448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7409c19f-472b-4074-81d0-8e43ac2bc9d4-xtables-lock\") pod \"kube-proxy-6jpzt\" (UID: \"7409c19f-472b-4074-81d0-8e43ac2bc9d4\") " pod="kube-system/kube-proxy-6jpzt"
	Nov 15 10:36:23 newest-cni-086099 kubelet[1448]: I1115 10:36:23.735303    1448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zjnk\" (UniqueName: \"kubernetes.io/projected/7409c19f-472b-4074-81d0-8e43ac2bc9d4-kube-api-access-5zjnk\") pod \"kube-proxy-6jpzt\" (UID: \"7409c19f-472b-4074-81d0-8e43ac2bc9d4\") " pod="kube-system/kube-proxy-6jpzt"
	Nov 15 10:36:23 newest-cni-086099 kubelet[1448]: I1115 10:36:23.735333    1448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1b25f4e6-5f26-42ce-8ceb-56003682c785-cni-cfg\") pod \"kindnet-2h7mm\" (UID: \"1b25f4e6-5f26-42ce-8ceb-56003682c785\") " pod="kube-system/kindnet-2h7mm"
	Nov 15 10:36:23 newest-cni-086099 kubelet[1448]: I1115 10:36:23.735354    1448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b25f4e6-5f26-42ce-8ceb-56003682c785-lib-modules\") pod \"kindnet-2h7mm\" (UID: \"1b25f4e6-5f26-42ce-8ceb-56003682c785\") " pod="kube-system/kindnet-2h7mm"
	Nov 15 10:36:23 newest-cni-086099 kubelet[1448]: I1115 10:36:23.735395    1448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7409c19f-472b-4074-81d0-8e43ac2bc9d4-lib-modules\") pod \"kube-proxy-6jpzt\" (UID: \"7409c19f-472b-4074-81d0-8e43ac2bc9d4\") " pod="kube-system/kube-proxy-6jpzt"
	Nov 15 10:36:23 newest-cni-086099 kubelet[1448]: I1115 10:36:23.735412    1448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b25f4e6-5f26-42ce-8ceb-56003682c785-xtables-lock\") pod \"kindnet-2h7mm\" (UID: \"1b25f4e6-5f26-42ce-8ceb-56003682c785\") " pod="kube-system/kindnet-2h7mm"
	Nov 15 10:36:23 newest-cni-086099 kubelet[1448]: I1115 10:36:23.735443    1448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkmvc\" (UniqueName: \"kubernetes.io/projected/1b25f4e6-5f26-42ce-8ceb-56003682c785-kube-api-access-mkmvc\") pod \"kindnet-2h7mm\" (UID: \"1b25f4e6-5f26-42ce-8ceb-56003682c785\") " pod="kube-system/kindnet-2h7mm"
	Nov 15 10:36:23 newest-cni-086099 kubelet[1448]: I1115 10:36:23.735528    1448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7409c19f-472b-4074-81d0-8e43ac2bc9d4-kube-proxy\") pod \"kube-proxy-6jpzt\" (UID: \"7409c19f-472b-4074-81d0-8e43ac2bc9d4\") " pod="kube-system/kube-proxy-6jpzt"
	Nov 15 10:36:23 newest-cni-086099 kubelet[1448]: E1115 10:36:23.842127    1448 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 15 10:36:23 newest-cni-086099 kubelet[1448]: E1115 10:36:23.842171    1448 projected.go:196] Error preparing data for projected volume kube-api-access-mkmvc for pod kube-system/kindnet-2h7mm: configmap "kube-root-ca.crt" not found
	Nov 15 10:36:23 newest-cni-086099 kubelet[1448]: E1115 10:36:23.842238    1448 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1b25f4e6-5f26-42ce-8ceb-56003682c785-kube-api-access-mkmvc podName:1b25f4e6-5f26-42ce-8ceb-56003682c785 nodeName:}" failed. No retries permitted until 2025-11-15 10:36:24.342216363 +0000 UTC m=+5.913070670 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mkmvc" (UniqueName: "kubernetes.io/projected/1b25f4e6-5f26-42ce-8ceb-56003682c785-kube-api-access-mkmvc") pod "kindnet-2h7mm" (UID: "1b25f4e6-5f26-42ce-8ceb-56003682c785") : configmap "kube-root-ca.crt" not found
	Nov 15 10:36:23 newest-cni-086099 kubelet[1448]: E1115 10:36:23.842122    1448 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 15 10:36:23 newest-cni-086099 kubelet[1448]: E1115 10:36:23.842278    1448 projected.go:196] Error preparing data for projected volume kube-api-access-5zjnk for pod kube-system/kube-proxy-6jpzt: configmap "kube-root-ca.crt" not found
	Nov 15 10:36:23 newest-cni-086099 kubelet[1448]: E1115 10:36:23.842354    1448 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7409c19f-472b-4074-81d0-8e43ac2bc9d4-kube-api-access-5zjnk podName:7409c19f-472b-4074-81d0-8e43ac2bc9d4 nodeName:}" failed. No retries permitted until 2025-11-15 10:36:24.342334429 +0000 UTC m=+5.913188726 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5zjnk" (UniqueName: "kubernetes.io/projected/7409c19f-472b-4074-81d0-8e43ac2bc9d4-kube-api-access-5zjnk") pod "kube-proxy-6jpzt" (UID: "7409c19f-472b-4074-81d0-8e43ac2bc9d4") : configmap "kube-root-ca.crt" not found
	Nov 15 10:36:24 newest-cni-086099 kubelet[1448]: W1115 10:36:24.539004    1448 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/e6860e06d975adb47ad2079893175972a79fb3e0d03ee7ef837f4b7f29e21e14/crio-4e8bf51611a75c1e12a8f147232a7b14baa4815a472bf4d2916e879b9c5f0ff8 WatchSource:0}: Error finding container 4e8bf51611a75c1e12a8f147232a7b14baa4815a472bf4d2916e879b9c5f0ff8: Status 404 returned error can't find the container with id 4e8bf51611a75c1e12a8f147232a7b14baa4815a472bf4d2916e879b9c5f0ff8
	Nov 15 10:36:24 newest-cni-086099 kubelet[1448]: W1115 10:36:24.542418    1448 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/e6860e06d975adb47ad2079893175972a79fb3e0d03ee7ef837f4b7f29e21e14/crio-501d631c39093d8bdbe0ac12f14195c7454b04430daef50c91a0e51ea9fbd92e WatchSource:0}: Error finding container 501d631c39093d8bdbe0ac12f14195c7454b04430daef50c91a0e51ea9fbd92e: Status 404 returned error can't find the container with id 501d631c39093d8bdbe0ac12f14195c7454b04430daef50c91a0e51ea9fbd92e
	Nov 15 10:36:25 newest-cni-086099 kubelet[1448]: I1115 10:36:25.671560    1448 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-2h7mm" podStartSLOduration=2.671535808 podStartE2EDuration="2.671535808s" podCreationTimestamp="2025-11-15 10:36:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:36:25.669987983 +0000 UTC m=+7.240842296" watchObservedRunningTime="2025-11-15 10:36:25.671535808 +0000 UTC m=+7.242390126"
	Nov 15 10:36:25 newest-cni-086099 kubelet[1448]: I1115 10:36:25.683102    1448 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6jpzt" podStartSLOduration=2.683078354 podStartE2EDuration="2.683078354s" podCreationTimestamp="2025-11-15 10:36:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:36:25.682783067 +0000 UTC m=+7.253637431" watchObservedRunningTime="2025-11-15 10:36:25.683078354 +0000 UTC m=+7.253932670"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-086099 -n newest-cni-086099
E1115 10:36:27.172554   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kindnet-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-086099 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-rblh2 storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-086099 describe pod coredns-66bc5c9577-rblh2 storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-086099 describe pod coredns-66bc5c9577-rblh2 storage-provisioner: exit status 1 (58.769408ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-rblh2" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-086099 describe pod coredns-66bc5c9577-rblh2 storage-provisioner: exit status 1
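The NotFound errors above are expected from the way the post-mortem is issued: the earlier `kubectl get po -A --field-selector=status.phase!=Running` lists pods across all namespaces, but the follow-up `kubectl describe pod ...` is run without a namespace flag, so kubectl searches only the default namespace. A minimal sketch of a follow-up query that would locate the same pods, assuming they live in kube-system as is conventional for coredns and storage-provisioner in minikube (hypothetical command, not part of the recorded test run):

	# add the missing namespace to the describe call
	kubectl --context newest-cni-086099 -n kube-system describe pod coredns-66bc5c9577-rblh2 storage-provisioner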
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.01s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (5.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-086099 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-086099 --alsologtostderr -v=1: exit status 80 (1.576614802s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-086099 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:36:42.266374  391736 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:36:42.266727  391736 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:36:42.266737  391736 out.go:374] Setting ErrFile to fd 2...
	I1115 10:36:42.266744  391736 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:36:42.267143  391736 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 10:36:42.267508  391736 out.go:368] Setting JSON to false
	I1115 10:36:42.267607  391736 mustload.go:66] Loading cluster: newest-cni-086099
	I1115 10:36:42.268146  391736 config.go:182] Loaded profile config "newest-cni-086099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:42.268840  391736 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:42.289989  391736 host.go:66] Checking if "newest-cni-086099" exists ...
	I1115 10:36:42.290319  391736 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:36:42.351836  391736 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:76 SystemTime:2025-11-15 10:36:42.33777595 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed
by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:36:42.352749  391736 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-086099 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1115 10:36:42.355292  391736 out.go:179] * Pausing node newest-cni-086099 ... 
	I1115 10:36:42.356879  391736 host.go:66] Checking if "newest-cni-086099" exists ...
	I1115 10:36:42.357235  391736 ssh_runner.go:195] Run: systemctl --version
	I1115 10:36:42.357280  391736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:42.379996  391736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:42.475224  391736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:36:42.488982  391736 pause.go:52] kubelet running: true
	I1115 10:36:42.489061  391736 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:36:42.627144  391736 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:36:42.627234  391736 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:36:42.709264  391736 cri.go:89] found id: "fd0dca50b719947cba6525c14321a402534c19e3683222e6c08343aef639fcb8"
	I1115 10:36:42.709292  391736 cri.go:89] found id: "b75f03f3e085747d0020a39d05a5df311846f6ea3bfb6996c31eb63e75196968"
	I1115 10:36:42.709300  391736 cri.go:89] found id: "38ec6363bcab12e98490c10ae199bee3e96e76c3b252eb09eaacbceb79f4fee3"
	I1115 10:36:42.709304  391736 cri.go:89] found id: "dcddb7cd9963badc91f7602efc0c7dd3a6aa66928df5288774cb992a0f211b2c"
	I1115 10:36:42.709308  391736 cri.go:89] found id: "938d8a7a407d113893b3543524a1f018292ab3c06ad38c37347f5c09b4f19aed"
	I1115 10:36:42.709314  391736 cri.go:89] found id: "6799daac297c17fbe94a59a4b23a00cc23bdfe7671f3f31f803b074be4ef25a5"
	I1115 10:36:42.709318  391736 cri.go:89] found id: ""
	I1115 10:36:42.709381  391736 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:36:42.721012  391736 retry.go:31] will retry after 325.024392ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:42Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:36:43.046498  391736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:36:43.064857  391736 pause.go:52] kubelet running: false
	I1115 10:36:43.065063  391736 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:36:43.192000  391736 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:36:43.192085  391736 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:36:43.266473  391736 cri.go:89] found id: "fd0dca50b719947cba6525c14321a402534c19e3683222e6c08343aef639fcb8"
	I1115 10:36:43.266514  391736 cri.go:89] found id: "b75f03f3e085747d0020a39d05a5df311846f6ea3bfb6996c31eb63e75196968"
	I1115 10:36:43.266520  391736 cri.go:89] found id: "38ec6363bcab12e98490c10ae199bee3e96e76c3b252eb09eaacbceb79f4fee3"
	I1115 10:36:43.266524  391736 cri.go:89] found id: "dcddb7cd9963badc91f7602efc0c7dd3a6aa66928df5288774cb992a0f211b2c"
	I1115 10:36:43.266528  391736 cri.go:89] found id: "938d8a7a407d113893b3543524a1f018292ab3c06ad38c37347f5c09b4f19aed"
	I1115 10:36:43.266532  391736 cri.go:89] found id: "6799daac297c17fbe94a59a4b23a00cc23bdfe7671f3f31f803b074be4ef25a5"
	I1115 10:36:43.266536  391736 cri.go:89] found id: ""
	I1115 10:36:43.266583  391736 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:36:43.283462  391736 retry.go:31] will retry after 194.081378ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:43Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:36:43.478791  391736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:36:43.491873  391736 pause.go:52] kubelet running: false
	I1115 10:36:43.491975  391736 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:36:43.672113  391736 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:36:43.672219  391736 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:36:43.745447  391736 cri.go:89] found id: "fd0dca50b719947cba6525c14321a402534c19e3683222e6c08343aef639fcb8"
	I1115 10:36:43.745473  391736 cri.go:89] found id: "b75f03f3e085747d0020a39d05a5df311846f6ea3bfb6996c31eb63e75196968"
	I1115 10:36:43.745478  391736 cri.go:89] found id: "38ec6363bcab12e98490c10ae199bee3e96e76c3b252eb09eaacbceb79f4fee3"
	I1115 10:36:43.745483  391736 cri.go:89] found id: "dcddb7cd9963badc91f7602efc0c7dd3a6aa66928df5288774cb992a0f211b2c"
	I1115 10:36:43.745487  391736 cri.go:89] found id: "938d8a7a407d113893b3543524a1f018292ab3c06ad38c37347f5c09b4f19aed"
	I1115 10:36:43.745491  391736 cri.go:89] found id: "6799daac297c17fbe94a59a4b23a00cc23bdfe7671f3f31f803b074be4ef25a5"
	I1115 10:36:43.745495  391736 cri.go:89] found id: ""
	I1115 10:36:43.745568  391736 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:36:43.760065  391736 out.go:203] 
	W1115 10:36:43.761257  391736 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:36:43.761284  391736 out.go:285] * 
	* 
	W1115 10:36:43.768364  391736 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:36:43.773050  391736 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-086099 --alsologtostderr -v=1 failed: exit status 80
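The exit status 80 above traces back to minikube's pause path shelling into the node and running `sudo runc list -f json`, which fails with "open /run/runc: no such file or directory" even though `crictl ps` (the "found id" lines in the log) shows running containers. One possible explanation is that this crio node keeps its OCI runtime state somewhere other than runc's default root, for example under /run/crun if crun is the default runtime. A minimal investigation sketch, using hypothetical follow-up commands that were not part of the test run and assuming the standard /etc/crio config location:

	# does either runtime state directory exist on the node?
	minikube ssh -p newest-cni-086099 -- "ls -d /run/runc /run/crun 2>&1"
	# which default OCI runtime is crio configured with?
	minikube ssh -p newest-cni-086099 -- "grep -rn default_runtime /etc/crio/ 2>/dev/null"
	# confirm containers are actually running via the CRI, independent of runc
	minikube ssh -p newest-cni-086099 -- "sudo crictl ps"

If the state directory that `runc list` reads (default /run/runc) is absent, the listing fails regardless of how many containers crio is running, which would match the retries and the final GUEST_PAUSE error recorded above.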
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-086099
helpers_test.go:243: (dbg) docker inspect newest-cni-086099:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e6860e06d975adb47ad2079893175972a79fb3e0d03ee7ef837f4b7f29e21e14",
	        "Created": "2025-11-15T10:35:57.263723596Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 387791,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:36:29.158751083Z",
	            "FinishedAt": "2025-11-15T10:36:27.895734196Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/e6860e06d975adb47ad2079893175972a79fb3e0d03ee7ef837f4b7f29e21e14/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6860e06d975adb47ad2079893175972a79fb3e0d03ee7ef837f4b7f29e21e14/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6860e06d975adb47ad2079893175972a79fb3e0d03ee7ef837f4b7f29e21e14/hosts",
	        "LogPath": "/var/lib/docker/containers/e6860e06d975adb47ad2079893175972a79fb3e0d03ee7ef837f4b7f29e21e14/e6860e06d975adb47ad2079893175972a79fb3e0d03ee7ef837f4b7f29e21e14-json.log",
	        "Name": "/newest-cni-086099",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-086099:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-086099",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e6860e06d975adb47ad2079893175972a79fb3e0d03ee7ef837f4b7f29e21e14",
	                "LowerDir": "/var/lib/docker/overlay2/9482d5d3fc87916f5559bace41af8ac4799d27e44564678531b92a41e1b27eaf-init/diff:/var/lib/docker/overlay2/507c85b6fb43d9c98216fac79d7a8d08dd20b2a63d0fbfc758336e5d38c04044/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9482d5d3fc87916f5559bace41af8ac4799d27e44564678531b92a41e1b27eaf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9482d5d3fc87916f5559bace41af8ac4799d27e44564678531b92a41e1b27eaf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9482d5d3fc87916f5559bace41af8ac4799d27e44564678531b92a41e1b27eaf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-086099",
	                "Source": "/var/lib/docker/volumes/newest-cni-086099/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-086099",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-086099",
	                "name.minikube.sigs.k8s.io": "newest-cni-086099",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b9413fb79e83e8ccdf506b9c4daaf44e59c92c01504bb3ca5c4abfd806186f2b",
	            "SandboxKey": "/var/run/docker/netns/b9413fb79e83",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-086099": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "09708c4610e17a8aeca1147b11bbc4d170ab97359e0b99b5bd4de917c0e4fd72",
	                    "EndpointID": "17eb7e5e13e076ef0d5938b085776d0625af70ec325f7e005124647b424970fd",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "3e:2e:54:d1:1d:87",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-086099",
	                        "e6860e06d975"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
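The inspect output above confirms the node container is still running with its service ports published on loopback (22/tcp on 127.0.0.1:33129, 8443/tcp on 127.0.0.1:33132, and so on). As a hedged aside for anyone reproducing this locally, the same mapping can be read back with the Go-template query that minikube's cli_runner itself issues later in this log; the snippet below is only a debugging sketch, not part of the test run:

	# Query the host port Docker published for the node's SSH endpoint (22/tcp).
	# Same --format template used by cli_runner in the provisioning log further down.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-086099
	# For this run the command would print 33129 (see the NetworkSettings.Ports block above).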
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-086099 -n newest-cni-086099
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-086099 -n newest-cni-086099: exit status 2 (355.695345ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
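Since the host status above still reports Running, the failed step can be replayed directly against the same profile for local triage. This is only a convenience sketch repeating the invocation that exited with status 80 earlier in this report, plus the log-collection command suggested in the error box:

	# Replay the pause step that failed with exit status 80 in this run.
	out/minikube-linux-amd64 pause -p newest-cni-086099 --alsologtostderr -v=1
	# If it fails again, collect logs as the error box above suggests.
	out/minikube-linux-amd64 -p newest-cni-086099 logs --file=logs.txt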
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-086099 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p no-preload-283677 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ start   │ -p no-preload-283677 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable metrics-server -p embed-certs-719574 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ stop    │ -p embed-certs-719574 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ image   │ old-k8s-version-087235 image list --format=json                                                                                                                                                                                               │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ pause   │ -p old-k8s-version-087235 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ delete  │ -p old-k8s-version-087235                                                                                                                                                                                                                     │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-719574 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p embed-certs-719574 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p old-k8s-version-087235                                                                                                                                                                                                                     │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p newest-cni-086099 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:36 UTC │
	│ image   │ no-preload-283677 image list --format=json                                                                                                                                                                                                    │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ pause   │ -p no-preload-283677 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ delete  │ -p no-preload-283677                                                                                                                                                                                                                          │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p no-preload-283677                                                                                                                                                                                                                          │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-026691 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-026691 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p newest-cni-086099 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ stop    │ -p newest-cni-086099 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-086099 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ start   │ -p newest-cni-086099 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-026691 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ start   │ -p default-k8s-diff-port-026691 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ image   │ newest-cni-086099 image list --format=json                                                                                                                                                                                                    │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ pause   │ -p newest-cni-086099 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:36:31
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:36:31.193182  388420 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:36:31.193281  388420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:36:31.193289  388420 out.go:374] Setting ErrFile to fd 2...
	I1115 10:36:31.193293  388420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:36:31.193515  388420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 10:36:31.193933  388420 out.go:368] Setting JSON to false
	I1115 10:36:31.195111  388420 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8328,"bootTime":1763194663,"procs":266,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:36:31.195216  388420 start.go:143] virtualization: kvm guest
	I1115 10:36:31.196894  388420 out.go:179] * [default-k8s-diff-port-026691] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:36:31.198076  388420 notify.go:221] Checking for updates...
	I1115 10:36:31.198087  388420 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 10:36:31.199249  388420 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:36:31.200471  388420 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:36:31.201512  388420 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube
	I1115 10:36:31.202449  388420 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:36:31.203634  388420 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:36:31.205205  388420 config.go:182] Loaded profile config "default-k8s-diff-port-026691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:31.205718  388420 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:36:31.228892  388420 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:36:31.229044  388420 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:36:31.285898  388420 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:67 SystemTime:2025-11-15 10:36:31.276283811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:36:31.286032  388420 docker.go:319] overlay module found
	I1115 10:36:31.287655  388420 out.go:179] * Using the docker driver based on existing profile
	I1115 10:36:31.288859  388420 start.go:309] selected driver: docker
	I1115 10:36:31.288877  388420 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-026691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:31.288972  388420 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:36:31.289812  388420 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:36:31.352009  388420 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:66 SystemTime:2025-11-15 10:36:31.342104199 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:36:31.352371  388420 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:36:31.352408  388420 cni.go:84] Creating CNI manager for ""
	I1115 10:36:31.352457  388420 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:36:31.352498  388420 start.go:353] cluster config:
	{Name:default-k8s-diff-port-026691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:31.354418  388420 out.go:179] * Starting "default-k8s-diff-port-026691" primary control-plane node in "default-k8s-diff-port-026691" cluster
	I1115 10:36:31.355595  388420 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:36:31.356825  388420 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:36:31.357856  388420 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:36:31.357890  388420 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 10:36:31.357905  388420 cache.go:65] Caching tarball of preloaded images
	I1115 10:36:31.357944  388420 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:36:31.358020  388420 preload.go:238] Found /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 10:36:31.358036  388420 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:36:31.358136  388420 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/config.json ...
	I1115 10:36:31.378843  388420 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:36:31.378864  388420 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:36:31.378881  388420 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:36:31.378904  388420 start.go:360] acquireMachinesLock for default-k8s-diff-port-026691: {Name:mk1f3196dd9a24a043fa707553211d0b0ea8c1f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:36:31.378986  388420 start.go:364] duration metric: took 61.257µs to acquireMachinesLock for "default-k8s-diff-port-026691"
	I1115 10:36:31.379010  388420 start.go:96] Skipping create...Using existing machine configuration
	I1115 10:36:31.379018  388420 fix.go:54] fixHost starting: 
	I1115 10:36:31.379252  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:31.397025  388420 fix.go:112] recreateIfNeeded on default-k8s-diff-port-026691: state=Stopped err=<nil>
	W1115 10:36:31.397068  388420 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 10:36:29.135135  387591 out.go:252] * Restarting existing docker container for "newest-cni-086099" ...
	I1115 10:36:29.135222  387591 cli_runner.go:164] Run: docker start newest-cni-086099
	I1115 10:36:29.412428  387591 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:29.431258  387591 kic.go:430] container "newest-cni-086099" state is running.
	I1115 10:36:29.431760  387591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-086099
	I1115 10:36:29.450271  387591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/config.json ...
	I1115 10:36:29.450487  387591 machine.go:94] provisionDockerMachine start ...
	I1115 10:36:29.450542  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:29.468796  387591 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:29.469141  387591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1115 10:36:29.469158  387591 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:36:29.469768  387591 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43374->127.0.0.1:33129: read: connection reset by peer
	I1115 10:36:32.597021  387591 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-086099
	
	I1115 10:36:32.597063  387591 ubuntu.go:182] provisioning hostname "newest-cni-086099"
	I1115 10:36:32.597140  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:32.616934  387591 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:32.617209  387591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1115 10:36:32.617233  387591 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-086099 && echo "newest-cni-086099" | sudo tee /etc/hostname
	I1115 10:36:32.756237  387591 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-086099
	
	I1115 10:36:32.756329  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:32.775168  387591 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:32.775389  387591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1115 10:36:32.775405  387591 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-086099' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-086099/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-086099' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:36:32.902668  387591 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:36:32.902701  387591 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-55448/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-55448/.minikube}
	I1115 10:36:32.902736  387591 ubuntu.go:190] setting up certificates
	I1115 10:36:32.902754  387591 provision.go:84] configureAuth start
	I1115 10:36:32.902811  387591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-086099
	I1115 10:36:32.921923  387591 provision.go:143] copyHostCerts
	I1115 10:36:32.922017  387591 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem, removing ...
	I1115 10:36:32.922035  387591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem
	I1115 10:36:32.922102  387591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem (1082 bytes)
	I1115 10:36:32.922216  387591 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem, removing ...
	I1115 10:36:32.922225  387591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem
	I1115 10:36:32.922253  387591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem (1123 bytes)
	I1115 10:36:32.922341  387591 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem, removing ...
	I1115 10:36:32.922348  387591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem
	I1115 10:36:32.922372  387591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem (1679 bytes)
	I1115 10:36:32.922421  387591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem org=jenkins.newest-cni-086099 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-086099]
	I1115 10:36:32.940854  387591 provision.go:177] copyRemoteCerts
	I1115 10:36:32.940914  387591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:36:32.940948  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:32.958931  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:33.053731  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:36:33.071243  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:36:33.088651  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 10:36:33.105219  387591 provision.go:87] duration metric: took 202.453369ms to configureAuth
	I1115 10:36:33.105244  387591 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:36:33.105414  387591 config.go:182] Loaded profile config "newest-cni-086099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:33.105509  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:33.123012  387591 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:33.123259  387591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1115 10:36:33.123277  387591 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:36:33.389799  387591 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:36:33.389822  387591 machine.go:97] duration metric: took 3.93932207s to provisionDockerMachine
	I1115 10:36:33.389835  387591 start.go:293] postStartSetup for "newest-cni-086099" (driver="docker")
	I1115 10:36:33.389844  387591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:36:33.389903  387591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:36:33.389946  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:33.409403  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:33.503330  387591 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:36:33.506790  387591 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:36:33.506815  387591 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:36:33.506825  387591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/addons for local assets ...
	I1115 10:36:33.506878  387591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/files for local assets ...
	I1115 10:36:33.506995  387591 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem -> 589622.pem in /etc/ssl/certs
	I1115 10:36:33.507126  387591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:36:33.514570  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:33.531880  387591 start.go:296] duration metric: took 142.028023ms for postStartSetup
	I1115 10:36:33.532012  387591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:36:33.532066  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:33.549908  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:33.640348  387591 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:36:33.645124  387591 fix.go:56] duration metric: took 4.529931109s for fixHost
	I1115 10:36:33.645164  387591 start.go:83] releasing machines lock for "newest-cni-086099", held for 4.529982501s
	I1115 10:36:33.645246  387591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-086099
	I1115 10:36:33.663364  387591 ssh_runner.go:195] Run: cat /version.json
	I1115 10:36:33.663400  387591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:36:33.663445  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:33.663461  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:33.682200  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:33.682521  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:33.827221  387591 ssh_runner.go:195] Run: systemctl --version
	I1115 10:36:33.834019  387591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:36:33.868151  387591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:36:33.872995  387591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:36:33.873067  387591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:36:33.881540  387591 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:36:33.881563  387591 start.go:496] detecting cgroup driver to use...
	I1115 10:36:33.881595  387591 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:36:33.881628  387591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:36:33.895704  387591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:36:33.907633  387591 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:36:33.907681  387591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:36:33.921408  387591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	W1115 10:36:30.745845  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	W1115 10:36:32.746544  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	I1115 10:36:33.933689  387591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:36:34.015025  387591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:36:34.097166  387591 docker.go:234] disabling docker service ...
	I1115 10:36:34.097250  387591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:36:34.111501  387591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:36:34.123898  387591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:36:34.208076  387591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:36:34.289077  387591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:36:34.302010  387591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:36:34.316333  387591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:36:34.316409  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.325113  387591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:36:34.325175  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.333844  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.342343  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.350817  387591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:36:34.359269  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.368008  387591 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.376100  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.384822  387591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:36:34.392091  387591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:36:34.399149  387591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:34.478616  387591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:36:34.580323  387591 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:36:34.580408  387591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:36:34.584509  387591 start.go:564] Will wait 60s for crictl version
	I1115 10:36:34.584568  387591 ssh_runner.go:195] Run: which crictl
	I1115 10:36:34.588078  387591 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:36:34.613070  387591 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:36:34.613150  387591 ssh_runner.go:195] Run: crio --version
	I1115 10:36:34.641080  387591 ssh_runner.go:195] Run: crio --version
	I1115 10:36:34.670335  387591 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:36:34.671690  387591 cli_runner.go:164] Run: docker network inspect newest-cni-086099 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:36:34.689678  387591 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1115 10:36:34.693973  387591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:34.705342  387591 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1115 10:36:31.398937  388420 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-026691" ...
	I1115 10:36:31.399016  388420 cli_runner.go:164] Run: docker start default-k8s-diff-port-026691
	I1115 10:36:31.676189  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:31.694382  388420 kic.go:430] container "default-k8s-diff-port-026691" state is running.
	I1115 10:36:31.694751  388420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-026691
	I1115 10:36:31.713425  388420 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/config.json ...
	I1115 10:36:31.713652  388420 machine.go:94] provisionDockerMachine start ...
	I1115 10:36:31.713746  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:31.732991  388420 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:31.733252  388420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1115 10:36:31.733277  388420 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:36:31.734038  388420 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45950->127.0.0.1:33134: read: connection reset by peer
	I1115 10:36:34.867843  388420 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-026691
	
	I1115 10:36:34.867883  388420 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-026691"
	I1115 10:36:34.868072  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:34.887800  388420 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:34.888079  388420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1115 10:36:34.888098  388420 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-026691 && echo "default-k8s-diff-port-026691" | sudo tee /etc/hostname
	I1115 10:36:35.027312  388420 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-026691
	
	I1115 10:36:35.027402  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:35.049307  388420 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:35.049620  388420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1115 10:36:35.049653  388420 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-026691' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-026691/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-026691' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:36:35.185792  388420 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:36:35.185824  388420 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-55448/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-55448/.minikube}
	I1115 10:36:35.185877  388420 ubuntu.go:190] setting up certificates
	I1115 10:36:35.185889  388420 provision.go:84] configureAuth start
	I1115 10:36:35.185975  388420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-026691
	I1115 10:36:35.205215  388420 provision.go:143] copyHostCerts
	I1115 10:36:35.205302  388420 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem, removing ...
	I1115 10:36:35.205325  388420 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem
	I1115 10:36:35.205419  388420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem (1082 bytes)
	I1115 10:36:35.205578  388420 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem, removing ...
	I1115 10:36:35.205600  388420 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem
	I1115 10:36:35.205648  388420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem (1123 bytes)
	I1115 10:36:35.205811  388420 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem, removing ...
	I1115 10:36:35.205831  388420 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem
	I1115 10:36:35.205877  388420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem (1679 bytes)
	I1115 10:36:35.205988  388420 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-026691 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-026691 localhost minikube]
	I1115 10:36:35.356382  388420 provision.go:177] copyRemoteCerts
	I1115 10:36:35.356441  388420 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:36:35.356476  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:35.375752  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:35.470476  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:36:35.488150  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1115 10:36:35.505264  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:36:35.522854  388420 provision.go:87] duration metric: took 336.947608ms to configureAuth
	I1115 10:36:35.522880  388420 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:36:35.523120  388420 config.go:182] Loaded profile config "default-k8s-diff-port-026691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:35.523282  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:35.543167  388420 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:35.543480  388420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1115 10:36:35.543509  388420 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:36:35.848476  388420 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:36:35.848509  388420 machine.go:97] duration metric: took 4.134839636s to provisionDockerMachine
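The provisioning step above writes /etc/sysconfig/crio.minikube so CRI-O treats the service CIDR 10.96.0.0/12 as an insecure registry, then restarts the runtime. Condensed from the SSH command in the log, the same write looks roughly like:
	# sketch of the sysconfig drop-in written over SSH above
	sudo mkdir -p /etc/sysconfig
	printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio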
	I1115 10:36:35.848525  388420 start.go:293] postStartSetup for "default-k8s-diff-port-026691" (driver="docker")
	I1115 10:36:35.848541  388420 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:36:35.848616  388420 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:36:35.848671  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:35.868537  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:35.963605  388420 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:36:35.967175  388420 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:36:35.967199  388420 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:36:35.967209  388420 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/addons for local assets ...
	I1115 10:36:35.967263  388420 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/files for local assets ...
	I1115 10:36:35.967339  388420 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem -> 589622.pem in /etc/ssl/certs
	I1115 10:36:35.967422  388420 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:36:35.975404  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:35.992754  388420 start.go:296] duration metric: took 144.211835ms for postStartSetup
	I1115 10:36:35.992851  388420 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:36:35.992902  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:36.010853  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:36.106652  388420 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:36:36.111301  388420 fix.go:56] duration metric: took 4.732276816s for fixHost
	I1115 10:36:36.111327  388420 start.go:83] releasing machines lock for "default-k8s-diff-port-026691", held for 4.732326241s
	I1115 10:36:36.111401  388420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-026691
	I1115 10:36:36.133087  388420 ssh_runner.go:195] Run: cat /version.json
	I1115 10:36:36.133147  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:36.133224  388420 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:36:36.133295  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:36.161597  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:36.162169  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:34.706341  387591 kubeadm.go:884] updating cluster {Name:newest-cni-086099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	E1115 10:36:45.167919   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/auto-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

	I1115 10:36:34.706463  387591 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:36:34.706520  387591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:34.737832  387591 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:34.737871  387591 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:36:34.737929  387591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:34.765628  387591 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:34.765650  387591 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:36:34.765657  387591 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1115 10:36:34.765750  387591 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-086099 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:36:34.765813  387591 ssh_runner.go:195] Run: crio config
	I1115 10:36:34.812764  387591 cni.go:84] Creating CNI manager for ""
	I1115 10:36:34.812787  387591 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:36:34.812806  387591 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1115 10:36:34.812836  387591 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-086099 NodeName:newest-cni-086099 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:36:34.813018  387591 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-086099"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:36:34.813097  387591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:36:34.821514  387591 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:36:34.821582  387591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:36:34.829425  387591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1115 10:36:34.841803  387591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:36:34.854099  387591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
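The generated kubeadm config is copied to /var/tmp/minikube/kubeadm.yaml.new above. On this restart path the full init is skipped, but on a fresh node a file like this would normally be consumed by kubeadm directly; a hypothetical dry-run sketch (assuming kubeadm is available and the file has been renamed into place):
	# validate the generated config without touching the node
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=all --dry-run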
	I1115 10:36:34.867123  387591 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:36:34.871300  387591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:34.882157  387591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:34.965624  387591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:36:34.991396  387591 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099 for IP: 192.168.103.2
	I1115 10:36:34.991421  387591 certs.go:195] generating shared ca certs ...
	I1115 10:36:34.991442  387591 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:34.991611  387591 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 10:36:34.991670  387591 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 10:36:34.991685  387591 certs.go:257] generating profile certs ...
	I1115 10:36:34.991800  387591 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/client.key
	I1115 10:36:34.991881  387591 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key.d719cdad
	I1115 10:36:34.991938  387591 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.key
	I1115 10:36:34.992114  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:36:34.992160  387591 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:36:34.992182  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:36:34.992223  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:36:34.992266  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:36:34.992298  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:36:34.992360  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:34.993060  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:36:35.012346  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:36:35.032525  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:36:35.052616  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:36:35.116969  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 10:36:35.141400  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:36:35.160318  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:36:35.178367  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:36:35.231343  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:36:35.251073  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:36:35.269574  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:36:35.287839  387591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:36:35.300609  387591 ssh_runner.go:195] Run: openssl version
	I1115 10:36:35.306757  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:36:35.315111  387591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:36:35.318673  387591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:36:35.318726  387591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:36:35.352595  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:36:35.360661  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:36:35.369044  387591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:36:35.373102  387591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:36:35.373149  387591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:36:35.407763  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:36:35.416805  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:36:35.426105  387591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:35.429879  387591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:35.429928  387591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:35.464376  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
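The hash-and-symlink sequence above is the standard OpenSSL trust-directory layout: each CA file needs a <subject-hash>.0 symlink under /etc/ssl/certs so OpenSSL can locate it. The same pattern for a single certificate, using the minikube CA shown in the log:
	# compute the subject hash and create the .0 symlink OpenSSL expects
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"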
	I1115 10:36:35.472689  387591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:36:35.476537  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:36:35.513422  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:36:35.552107  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:36:35.627892  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:36:35.738207  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:36:35.927631  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
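The series of `openssl x509 -checkend 86400` runs above verifies that each control-plane certificate remains valid for at least 24 hours (86400 seconds). The same check for one file, with an explicit result:
	# exit status 0 means the certificate will not expire within 24h
	if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
	  echo "certificate valid for at least 24h"
	else
	  echo "certificate expires within 24h (or could not be read)"
	fi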
	I1115 10:36:36.020791  387591 kubeadm.go:401] StartCluster: {Name:newest-cni-086099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:36.020915  387591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:36:36.020993  387591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:36:36.054712  387591 cri.go:89] found id: "38ec6363bcab12e98490c10ae199bee3e96e76c3b252eb09eaacbceb79f4fee3"
	I1115 10:36:36.054741  387591 cri.go:89] found id: "dcddb7cd9963badc91f7602efc0c7dd3a6aa66928df5288774cb992a0f211b2c"
	I1115 10:36:36.054748  387591 cri.go:89] found id: "938d8a7a407d113893b3543524a1f018292ab3c06ad38c37347f5c09b4f19aed"
	I1115 10:36:36.054753  387591 cri.go:89] found id: "6799daac297c17fbe94a59a4b23a00cc23bdfe7671f3f31f803b074be4ef25a5"
	I1115 10:36:36.054758  387591 cri.go:89] found id: ""
	I1115 10:36:36.054810  387591 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:36:36.122342  387591 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:36Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:36:36.122434  387591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:36:36.132788  387591 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:36:36.132807  387591 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:36:36.132853  387591 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:36:36.144175  387591 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:36:36.145209  387591 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-086099" does not appear in /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:36:36.145870  387591 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-55448/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-086099" cluster setting kubeconfig missing "newest-cni-086099" context setting]
	I1115 10:36:36.146847  387591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:36.149871  387591 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:36:36.217177  387591 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1115 10:36:36.217217  387591 kubeadm.go:602] duration metric: took 84.40299ms to restartPrimaryControlPlane
	I1115 10:36:36.217231  387591 kubeadm.go:403] duration metric: took 196.454161ms to StartCluster
	I1115 10:36:36.217253  387591 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:36.217343  387591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:36:36.218632  387591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:36.218872  387591 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:36:36.218972  387591 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:36:36.219074  387591 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-086099"
	I1115 10:36:36.219094  387591 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-086099"
	W1115 10:36:36.219105  387591 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:36:36.219138  387591 host.go:66] Checking if "newest-cni-086099" exists ...
	I1115 10:36:36.219158  387591 addons.go:70] Setting dashboard=true in profile "newest-cni-086099"
	I1115 10:36:36.219163  387591 config.go:182] Loaded profile config "newest-cni-086099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:36.219193  387591 addons.go:239] Setting addon dashboard=true in "newest-cni-086099"
	W1115 10:36:36.219202  387591 addons.go:248] addon dashboard should already be in state true
	I1115 10:36:36.219217  387591 addons.go:70] Setting default-storageclass=true in profile "newest-cni-086099"
	I1115 10:36:36.219235  387591 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-086099"
	I1115 10:36:36.219248  387591 host.go:66] Checking if "newest-cni-086099" exists ...
	I1115 10:36:36.219557  387591 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:36.219712  387591 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:36.219712  387591 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:36.220680  387591 out.go:179] * Verifying Kubernetes components...
	I1115 10:36:36.221665  387591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:36.248161  387591 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:36:36.248172  387591 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 10:36:36.249608  387591 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:36:36.249628  387591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:36:36.249683  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:36.249733  387591 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
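The addon wiring above enables storage-provisioner, default-storageclass, and dashboard while the cluster restarts. Outside the test harness the same addons can be toggled from the minikube CLI; a sketch, assuming a minikube binary on PATH and the profile name from the log:
	minikube -p newest-cni-086099 addons enable storage-provisioner
	minikube -p newest-cni-086099 addons enable dashboard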
	I1115 10:36:36.324481  388420 ssh_runner.go:195] Run: systemctl --version
	I1115 10:36:36.336623  388420 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:36:36.372576  388420 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:36:36.377572  388420 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:36:36.377633  388420 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:36:36.385687  388420 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:36:36.385710  388420 start.go:496] detecting cgroup driver to use...
	I1115 10:36:36.385740  388420 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:36:36.385776  388420 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:36:36.399728  388420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:36:36.411622  388420 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:36:36.411694  388420 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:36:36.431786  388420 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:36:36.449270  388420 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:36:36.538378  388420 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:36:36.622459  388420 docker.go:234] disabling docker service ...
	I1115 10:36:36.622563  388420 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:36:36.644022  388420 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:36:36.656349  388420 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:36:36.757453  388420 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:36:36.851752  388420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:36:36.864024  388420 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:36:36.878189  388420 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:36:36.878243  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.886869  388420 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:36:36.886944  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.895649  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.904129  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.912660  388420 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:36:36.922601  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.934730  388420 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.945527  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.955227  388420 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:36:36.962702  388420 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:36:36.969927  388420 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:37.064102  388420 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:36:37.181392  388420 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:36:37.181469  388420 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:36:37.185705  388420 start.go:564] Will wait 60s for crictl version
	I1115 10:36:37.185759  388420 ssh_runner.go:195] Run: which crictl
	I1115 10:36:37.189374  388420 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:36:37.214797  388420 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:36:37.214872  388420 ssh_runner.go:195] Run: crio --version
	I1115 10:36:37.247024  388420 ssh_runner.go:195] Run: crio --version
	I1115 10:36:37.283127  388420 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
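The sed edits above point CRI-O at registry.k8s.io/pause:3.10.1 and the cgroupfs cgroup manager through the /etc/crio/crio.conf.d/02-crio.conf drop-in, then reload and restart the runtime. Condensed from the commands in the log:
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload && sudo systemctl restart crio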
	W1115 10:36:35.246243  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	I1115 10:36:37.246256  377744 pod_ready.go:94] pod "coredns-66bc5c9577-fjzk5" is "Ready"
	I1115 10:36:37.246283  377744 pod_ready.go:86] duration metric: took 33.505674032s for pod "coredns-66bc5c9577-fjzk5" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.248931  377744 pod_ready.go:83] waiting for pod "etcd-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.253449  377744 pod_ready.go:94] pod "etcd-embed-certs-719574" is "Ready"
	I1115 10:36:37.253477  377744 pod_ready.go:86] duration metric: took 4.523106ms for pod "etcd-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.258749  377744 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.262996  377744 pod_ready.go:94] pod "kube-apiserver-embed-certs-719574" is "Ready"
	I1115 10:36:37.263019  377744 pod_ready.go:86] duration metric: took 4.2473ms for pod "kube-apiserver-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.265400  377744 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.444138  377744 pod_ready.go:94] pod "kube-controller-manager-embed-certs-719574" is "Ready"
	I1115 10:36:37.444168  377744 pod_ready.go:86] duration metric: took 178.743562ms for pod "kube-controller-manager-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.644722  377744 pod_ready.go:83] waiting for pod "kube-proxy-kmc8c" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:38.044247  377744 pod_ready.go:94] pod "kube-proxy-kmc8c" is "Ready"
	I1115 10:36:38.044277  377744 pod_ready.go:86] duration metric: took 399.527336ms for pod "kube-proxy-kmc8c" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:38.245350  377744 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:38.644894  377744 pod_ready.go:94] pod "kube-scheduler-embed-certs-719574" is "Ready"
	I1115 10:36:38.645014  377744 pod_ready.go:86] duration metric: took 399.62796ms for pod "kube-scheduler-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:38.645030  377744 pod_ready.go:40] duration metric: took 34.90782271s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:36:38.702511  377744 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:36:38.706562  377744 out.go:179] * Done! kubectl is now configured to use "embed-certs-719574" cluster and "default" namespace by default
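The readiness loop above waits for each kube-system control-plane pod to report Ready before declaring "embed-certs-719574" usable. An equivalent manual wait for the CoreDNS pods, assuming kubectl is already pointed at that context:
	kubectl --context embed-certs-719574 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=120s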
	I1115 10:36:37.284492  388420 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-026691 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:36:37.302095  388420 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1115 10:36:37.306321  388420 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:37.316768  388420 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-026691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:36:37.316911  388420 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:36:37.316980  388420 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:37.354039  388420 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:37.354063  388420 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:36:37.354121  388420 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:37.384223  388420 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:37.384249  388420 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:36:37.384257  388420 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1115 10:36:37.384353  388420 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-026691 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:36:37.384416  388420 ssh_runner.go:195] Run: crio config
	I1115 10:36:37.429588  388420 cni.go:84] Creating CNI manager for ""
	I1115 10:36:37.429616  388420 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:36:37.429637  388420 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:36:37.429663  388420 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-026691 NodeName:default-k8s-diff-port-026691 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:36:37.429840  388420 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-026691"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:36:37.429922  388420 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:36:37.438488  388420 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:36:37.438583  388420 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:36:37.446984  388420 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1115 10:36:37.459608  388420 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:36:37.472652  388420 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1115 10:36:37.484924  388420 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:36:37.488541  388420 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:37.498126  388420 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:37.587175  388420 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:36:37.609456  388420 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691 for IP: 192.168.85.2
	I1115 10:36:37.609480  388420 certs.go:195] generating shared ca certs ...
	I1115 10:36:37.609501  388420 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:37.609671  388420 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 10:36:37.609735  388420 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 10:36:37.609750  388420 certs.go:257] generating profile certs ...
	I1115 10:36:37.609859  388420 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/client.key
	I1115 10:36:37.609921  388420 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.key.f8824eec
	I1115 10:36:37.610007  388420 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.key
	I1115 10:36:37.610146  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:36:37.610198  388420 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:36:37.610212  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:36:37.610244  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:36:37.610278  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:36:37.610306  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:36:37.610359  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:37.611122  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:36:37.629925  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:36:37.650833  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:36:37.671862  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:36:37.696427  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1115 10:36:37.763348  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:36:37.782654  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:36:37.800720  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:36:37.817628  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:36:37.835327  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:36:37.856769  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:36:37.876039  388420 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:36:37.891255  388420 ssh_runner.go:195] Run: openssl version
	I1115 10:36:37.898994  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:36:37.907571  388420 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:36:37.912280  388420 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:36:37.912337  388420 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:36:37.950692  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:36:37.959456  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:36:37.968450  388420 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:36:37.972465  388420 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:36:37.972521  388420 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:36:38.008129  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:36:38.016745  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:36:38.027414  388420 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:38.031718  388420 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:38.031792  388420 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:38.077405  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
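	(Editor's note: the repeated `openssl x509 -hash` / `ln -fs` pairs above implement the standard OpenSSL CA-directory layout: each certificate in /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject-name hash with a .0 suffix, which is how OpenSSL-based clients find trust anchors (e.g. b5213941.0 for minikubeCA.pem in this run). A hedged sketch of the same idea for a single certificate, with the reporting lines added only for illustration:

	    # Compute the subject-name hash OpenSSL uses for lookup, then create the <hash>.0 link.
	    CERT=/usr/share/ca-certificates/minikubeCA.pem   # path taken from this log
	    HASH=$(openssl x509 -hash -noout -in "$CERT")
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
	)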
	I1115 10:36:38.086004  388420 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:36:38.089990  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:36:38.127939  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:36:38.181791  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:36:38.256153  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:36:38.368577  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:36:38.543333  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
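	(Editor's note: each `openssl x509 -checkend 86400` call above succeeds only if the certificate is still valid 86400 seconds (24 hours) from now, and exits non-zero otherwise; that is how the expiry of the control-plane certificates is probed here. A stand-alone example using one of the paths from this run — the echo reporting is illustrative, not part of the test:

	    # Exit status 0: still valid 24 h from now; non-zero: expires (or has expired) within 24 h.
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	      && echo "apiserver.crt valid for at least another day" \
	      || echo "apiserver.crt expires within 24 h"
	)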
	I1115 10:36:38.645754  388420 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-026691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:38.645863  388420 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:36:38.645935  388420 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:36:38.685210  388420 cri.go:89] found id: "b04411b3a023360282fda35601339f2000829b38e465e5c4c130b7c58e111bb3"
	I1115 10:36:38.685237  388420 cri.go:89] found id: "971b8e4c2073b59c0dd66594f19cfc1d0b9f81cb1bad24b21946d5ea7b012768"
	I1115 10:36:38.685254  388420 cri.go:89] found id: "6a1db649ea51d34c5661db65312d1c8660828376983f865ce0b3d2801b219c2d"
	I1115 10:36:38.685259  388420 cri.go:89] found id: "58595dd2cf4ce1cb8f740a6594f3ee7a3c7d2587682b4e6c526266c0067303a7"
	I1115 10:36:38.685262  388420 cri.go:89] found id: ""
	I1115 10:36:38.685312  388420 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:36:38.750674  388420 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:38Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:36:38.750744  388420 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:36:38.769157  388420 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:36:38.769186  388420 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:36:38.769238  388420 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:36:38.842499  388420 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:36:38.845337  388420 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-026691" does not appear in /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:36:38.846840  388420 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-55448/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-026691" cluster setting kubeconfig missing "default-k8s-diff-port-026691" context setting]
	I1115 10:36:38.849516  388420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:38.855210  388420 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:36:38.870026  388420 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1115 10:36:38.870059  388420 kubeadm.go:602] duration metric: took 100.86647ms to restartPrimaryControlPlane
	I1115 10:36:38.870073  388420 kubeadm.go:403] duration metric: took 224.328768ms to StartCluster
	I1115 10:36:38.870094  388420 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:38.870172  388420 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:36:38.872536  388420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:38.872812  388420 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:36:38.873059  388420 config.go:182] Loaded profile config "default-k8s-diff-port-026691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:38.873024  388420 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:36:38.873181  388420 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-026691"
	I1115 10:36:38.873220  388420 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-026691"
	W1115 10:36:38.873240  388420 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:36:38.873315  388420 host.go:66] Checking if "default-k8s-diff-port-026691" exists ...
	I1115 10:36:38.873258  388420 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-026691"
	I1115 10:36:38.873640  388420 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-026691"
	W1115 10:36:38.873663  388420 addons.go:248] addon dashboard should already be in state true
	I1115 10:36:38.873444  388420 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-026691"
	I1115 10:36:38.873728  388420 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-026691"
	I1115 10:36:38.873753  388420 host.go:66] Checking if "default-k8s-diff-port-026691" exists ...
	I1115 10:36:38.874091  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:38.874589  388420 out.go:179] * Verifying Kubernetes components...
	I1115 10:36:38.874818  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:38.875168  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:38.876706  388420 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:38.907308  388420 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:36:38.907363  388420 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-026691"
	W1115 10:36:38.907464  388420 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:36:38.907503  388420 host.go:66] Checking if "default-k8s-diff-port-026691" exists ...
	I1115 10:36:38.908043  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:38.912208  388420 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:36:38.912236  388420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:36:38.912295  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:38.915346  388420 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 10:36:38.916793  388420 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 10:36:36.250323  387591 addons.go:239] Setting addon default-storageclass=true in "newest-cni-086099"
	W1115 10:36:36.250350  387591 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:36:36.250389  387591 host.go:66] Checking if "newest-cni-086099" exists ...
	I1115 10:36:36.251476  387591 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:36.255103  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 10:36:36.255128  387591 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 10:36:36.255190  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:36.278537  387591 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:36:36.278565  387591 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:36:36.278644  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:36.280814  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:36.281721  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:36.296440  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:36.630526  387591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:36:36.633566  387591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:36:36.636633  387591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:36:36.638099  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 10:36:36.638116  387591 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 10:36:36.724472  387591 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:36:36.724559  387591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:36:36.729948  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 10:36:36.730015  387591 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 10:36:36.826253  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 10:36:36.826282  387591 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 10:36:36.843537  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 10:36:36.843560  387591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 10:36:36.931895  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 10:36:36.931924  387591 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 10:36:36.945766  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 10:36:36.945791  387591 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 10:36:37.023562  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 10:36:37.023593  387591 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 10:36:37.038918  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 10:36:37.038944  387591 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 10:36:37.052909  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:36:37.052937  387591 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 10:36:37.119950  387591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:36:39.816288  387591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.182684264s)
	I1115 10:36:40.959315  387591 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.234727667s)
	I1115 10:36:40.959363  387591 api_server.go:72] duration metric: took 4.740464162s to wait for apiserver process to appear ...
	I1115 10:36:40.959371  387591 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:36:40.959395  387591 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1115 10:36:40.959325  387591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.322653976s)
	I1115 10:36:40.959440  387591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.839423734s)
	I1115 10:36:40.962518  387591 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-086099 addons enable metrics-server
	
	I1115 10:36:40.964092  387591 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1115 10:36:38.917819  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 10:36:38.917851  388420 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 10:36:38.917924  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:38.930932  388420 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:36:38.930982  388420 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:36:38.931053  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:38.933702  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:38.939670  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:38.960258  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:39.257807  388420 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:36:39.264707  388420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:36:39.270235  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 10:36:39.270261  388420 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 10:36:39.274532  388420 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-026691" to be "Ready" ...
	I1115 10:36:39.351682  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 10:36:39.351725  388420 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 10:36:39.357989  388420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:36:39.374984  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 10:36:39.375011  388420 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 10:36:39.457352  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 10:36:39.457377  388420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 10:36:39.542591  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 10:36:39.542618  388420 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 10:36:39.565925  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 10:36:39.566041  388420 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 10:36:39.580123  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 10:36:39.580242  388420 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 10:36:39.655102  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 10:36:39.655149  388420 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 10:36:39.669218  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:36:39.669246  388420 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 10:36:39.683183  388420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:36:40.965416  387591 addons.go:515] duration metric: took 4.746465999s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1115 10:36:40.965454  387591 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:36:40.965477  387591 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
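	(Editor's note: a 500 here is expected shortly after an apiserver restart — every post-start hook reports ok except rbac/bootstrap-roles, and minikube simply re-polls /healthz until it returns 200, which happens about half a second later in this log. The same per-check breakdown can be requested by hand; a hedged sketch, assuming the kubectl context created for this profile is usable:

	    # Fetch the raw healthz path; "?verbose" makes the apiserver list each check as above.
	    kubectl --context newest-cni-086099 get --raw '/healthz?verbose'
	)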
	I1115 10:36:41.460167  387591 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1115 10:36:41.465475  387591 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1115 10:36:41.466642  387591 api_server.go:141] control plane version: v1.34.1
	I1115 10:36:41.466668  387591 api_server.go:131] duration metric: took 507.289044ms to wait for apiserver health ...
	I1115 10:36:41.466679  387591 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:36:41.470116  387591 system_pods.go:59] 8 kube-system pods found
	I1115 10:36:41.470165  387591 system_pods.go:61] "coredns-66bc5c9577-rblh2" [903029e0-3b15-43f3-836a-884de528cbc9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 10:36:41.470180  387591 system_pods.go:61] "etcd-newest-cni-086099" [6768a007-08a6-47b0-9917-cf54f577829b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:36:41.470190  387591 system_pods.go:61] "kindnet-2h7mm" [1b25f4e6-5f26-42ce-8ceb-56003682c785] Running
	I1115 10:36:41.470200  387591 system_pods.go:61] "kube-apiserver-newest-cni-086099" [3ca22829-f679-44bf-94e5-e4a368e13dcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:36:41.470210  387591 system_pods.go:61] "kube-controller-manager-newest-cni-086099" [1f45f32a-2d9e-49c0-9c69-d2aa59324564] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:36:41.470219  387591 system_pods.go:61] "kube-proxy-6jpzt" [7409c19f-472b-4074-81d0-8e43ac2bc9d4] Running
	I1115 10:36:41.470226  387591 system_pods.go:61] "kube-scheduler-newest-cni-086099" [c3510e0f-9b51-4fb5-bc6e-d0e47be8f5ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:36:41.470235  387591 system_pods.go:61] "storage-provisioner" [23166a3f-bb02-48ca-ab00-721c8c46525d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 10:36:41.470247  387591 system_pods.go:74] duration metric: took 3.560608ms to wait for pod list to return data ...
	I1115 10:36:41.470262  387591 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:36:41.472726  387591 default_sa.go:45] found service account: "default"
	I1115 10:36:41.472751  387591 default_sa.go:55] duration metric: took 2.478273ms for default service account to be created ...
	I1115 10:36:41.472765  387591 kubeadm.go:587] duration metric: took 5.253867745s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1115 10:36:41.472786  387591 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:36:41.475250  387591 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:36:41.475273  387591 node_conditions.go:123] node cpu capacity is 8
	I1115 10:36:41.475284  387591 node_conditions.go:105] duration metric: took 2.490696ms to run NodePressure ...
	I1115 10:36:41.475297  387591 start.go:242] waiting for startup goroutines ...
	I1115 10:36:41.475306  387591 start.go:247] waiting for cluster config update ...
	I1115 10:36:41.475322  387591 start.go:256] writing updated cluster config ...
	I1115 10:36:41.475622  387591 ssh_runner.go:195] Run: rm -f paused
	I1115 10:36:41.529383  387591 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:36:41.531753  387591 out.go:179] * Done! kubectl is now configured to use "newest-cni-086099" cluster and "default" namespace by default
	I1115 10:36:42.149798  388420 node_ready.go:49] node "default-k8s-diff-port-026691" is "Ready"
	I1115 10:36:42.149832  388420 node_ready.go:38] duration metric: took 2.87526393s for node "default-k8s-diff-port-026691" to be "Ready" ...
	I1115 10:36:42.149851  388420 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:36:42.149915  388420 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:36:43.654191  388420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.38943226s)
	I1115 10:36:43.654229  388420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.29621492s)
	I1115 10:36:43.654402  388420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.971169317s)
	I1115 10:36:43.654437  388420 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.50449925s)
	I1115 10:36:43.654474  388420 api_server.go:72] duration metric: took 4.78163246s to wait for apiserver process to appear ...
	I1115 10:36:43.654482  388420 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:36:43.654504  388420 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1115 10:36:43.655988  388420 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-026691 addons enable metrics-server
	
	I1115 10:36:43.659469  388420 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:36:43.659501  388420 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:36:43.660788  388420 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	
	
	==> CRI-O <==
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.424659063Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.427045481Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-6jpzt/POD" id=fbfe2f86-7411-4eb6-9014-45430d2a5cfe name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.42716381Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.430097698Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=6b7e8788-d22d-46c1-b18d-777d7b0bc391 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.430973215Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=fbfe2f86-7411-4eb6-9014-45430d2a5cfe name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.436012519Z" level=info msg="Ran pod sandbox 346ea9d5abba6e426c31383eb4ebefb7d0433b38b3adc9de1cf72f2f503f06be with infra container: kube-system/kindnet-2h7mm/POD" id=6b7e8788-d22d-46c1-b18d-777d7b0bc391 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.437463486Z" level=info msg="Ran pod sandbox 6b9a0f486e1136cc2b37740a110565e14407f000c0964255ea2a54da50666733 with infra container: kube-system/kube-proxy-6jpzt/POD" id=fbfe2f86-7411-4eb6-9014-45430d2a5cfe name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.438299823Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=2e888f2d-9940-4131-84c1-bf859332fb59 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.439858297Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=0c4d715c-09aa-4622-8cc2-cf519bebe33b name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.440524624Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=efa0ae6d-549f-49eb-ac2e-b3150c521f08 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.441684348Z" level=info msg="Creating container: kube-system/kindnet-2h7mm/kindnet-cni" id=6ac10152-47aa-40a4-83f5-c4e28de6a00f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.441774145Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.442101092Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=7dd17626-7116-4a93-947c-ed9982c33444 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.444740961Z" level=info msg="Creating container: kube-system/kube-proxy-6jpzt/kube-proxy" id=79d268c9-7d62-444c-994c-bc47147748b2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.445067198Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.446931153Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.447531314Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.452528673Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.514464246Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.534443583Z" level=info msg="Created container b75f03f3e085747d0020a39d05a5df311846f6ea3bfb6996c31eb63e75196968: kube-system/kindnet-2h7mm/kindnet-cni" id=6ac10152-47aa-40a4-83f5-c4e28de6a00f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.535143824Z" level=info msg="Starting container: b75f03f3e085747d0020a39d05a5df311846f6ea3bfb6996c31eb63e75196968" id=c4bfc0c4-0e5b-4087-a616-20fd4972f91d name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.537603641Z" level=info msg="Started container" PID=1159 containerID=b75f03f3e085747d0020a39d05a5df311846f6ea3bfb6996c31eb63e75196968 description=kube-system/kindnet-2h7mm/kindnet-cni id=c4bfc0c4-0e5b-4087-a616-20fd4972f91d name=/runtime.v1.RuntimeService/StartContainer sandboxID=346ea9d5abba6e426c31383eb4ebefb7d0433b38b3adc9de1cf72f2f503f06be
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.539739526Z" level=info msg="Created container fd0dca50b719947cba6525c14321a402534c19e3683222e6c08343aef639fcb8: kube-system/kube-proxy-6jpzt/kube-proxy" id=79d268c9-7d62-444c-994c-bc47147748b2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.540791575Z" level=info msg="Starting container: fd0dca50b719947cba6525c14321a402534c19e3683222e6c08343aef639fcb8" id=9bf6725a-14b6-48db-aad3-635499eac5c2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.544419368Z" level=info msg="Started container" PID=1160 containerID=fd0dca50b719947cba6525c14321a402534c19e3683222e6c08343aef639fcb8 description=kube-system/kube-proxy-6jpzt/kube-proxy id=9bf6725a-14b6-48db-aad3-635499eac5c2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6b9a0f486e1136cc2b37740a110565e14407f000c0964255ea2a54da50666733
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	fd0dca50b7199       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   4 seconds ago       Running             kube-proxy                1                   6b9a0f486e113       kube-proxy-6jpzt                            kube-system
	b75f03f3e0857       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   346ea9d5abba6       kindnet-2h7mm                               kube-system
	38ec6363bcab1       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   9 seconds ago       Running             kube-scheduler            1                   1dc8f60d5b838       kube-scheduler-newest-cni-086099            kube-system
	dcddb7cd9963b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   9 seconds ago       Running             kube-apiserver            1                   e1948784d69e4       kube-apiserver-newest-cni-086099            kube-system
	938d8a7a407d1       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   9 seconds ago       Running             kube-controller-manager   1                   b3233534b4f24       kube-controller-manager-newest-cni-086099   kube-system
	6799daac297c1       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 seconds ago       Running             etcd                      1                   4d385f8815122       etcd-newest-cni-086099                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-086099
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-086099
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=newest-cni-086099
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_36_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:36:15 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-086099
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:36:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:36:39 +0000   Sat, 15 Nov 2025 10:36:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:36:39 +0000   Sat, 15 Nov 2025 10:36:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:36:39 +0000   Sat, 15 Nov 2025 10:36:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 15 Nov 2025 10:36:39 +0000   Sat, 15 Nov 2025 10:36:12 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-086099
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                43538429-02c4-40c8-b533-c24bc0895325
	  Boot ID:                    6a33f504-3a8c-4d49-8fc5-301cf5275efc
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-086099                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         26s
	  kube-system                 kindnet-2h7mm                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      21s
	  kube-system                 kube-apiserver-newest-cni-086099             250m (3%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-controller-manager-newest-cni-086099    200m (2%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-proxy-6jpzt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 kube-scheduler-newest-cni-086099             100m (1%)     0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 19s                kube-proxy       
	  Normal   Starting                 3s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  33s (x8 over 33s)  kubelet          Node newest-cni-086099 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    33s (x8 over 33s)  kubelet          Node newest-cni-086099 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     33s (x8 over 33s)  kubelet          Node newest-cni-086099 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    26s                kubelet          Node newest-cni-086099 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 26s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  26s                kubelet          Node newest-cni-086099 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     26s                kubelet          Node newest-cni-086099 status is now: NodeHasSufficientPID
	  Normal   Starting                 26s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           22s                node-controller  Node newest-cni-086099 event: Registered Node newest-cni-086099 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9s (x8 over 9s)    kubelet          Node newest-cni-086099 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet          Node newest-cni-086099 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x8 over 9s)    kubelet          Node newest-cni-086099 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2s                 node-controller  Node newest-cni-086099 event: Registered Node newest-cni-086099 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 7f 53 f9 15 93 08 06
	[  +0.017821] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c6 66 25 e9 45 28 08 06
	[ +19.178294] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 a3 65 b9 28 15 08 06
	[  +0.347929] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 6d df 33 a5 ad 08 06
	[  +0.000319] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 7f 53 f9 15 93 08 06
	[Nov15 10:33] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 96 ed 10 e8 9a 80 08 06
	[  +0.000481] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 a3 65 b9 28 15 08 06
	[ +35.214217] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 c2 9a 56 52 0c 08 06
	[  +9.104720] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 c2 c4 d1 7c dd 08 06
	[Nov15 10:34] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 68 1e 80 53 05 08 06
	[  +0.000444] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 c2 c4 d1 7c dd 08 06
	[ +18.836046] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 98 db 78 4f 0b 08 06
	[  +0.000708] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 c2 9a 56 52 0c 08 06
	
	
	==> etcd [6799daac297c17fbe94a59a4b23a00cc23bdfe7671f3f31f803b074be4ef25a5] <==
	{"level":"warn","ts":"2025-11-15T10:36:38.490076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.495798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.514542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.522242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.529188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.535379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.542198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.553901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.568186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.575504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.584554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.598098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.604221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.610290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.622707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.629143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.635449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.642140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.660843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.669440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.677770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.694462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.701292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.708209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.788502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37052","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:36:44 up  2:19,  0 user,  load average: 3.43, 4.20, 2.79
	Linux newest-cni-086099 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b75f03f3e085747d0020a39d05a5df311846f6ea3bfb6996c31eb63e75196968] <==
	I1115 10:36:40.816911       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:36:40.817211       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1115 10:36:40.817433       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:36:40.817492       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:36:40.817547       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:36:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:36:40.969746       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:36:41.057277       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:36:41.057326       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:36:41.058121       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [dcddb7cd9963badc91f7602efc0c7dd3a6aa66928df5288774cb992a0f211b2c] <==
	I1115 10:36:39.520614       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 10:36:39.521165       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1115 10:36:39.521277       1 aggregator.go:171] initial CRD sync complete...
	I1115 10:36:39.521330       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 10:36:39.521355       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 10:36:39.521378       1 cache.go:39] Caches are synced for autoregister controller
	I1115 10:36:39.524855       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1115 10:36:39.525244       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 10:36:39.532306       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1115 10:36:39.532342       1 policy_source.go:240] refreshing policies
	I1115 10:36:39.534468       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1115 10:36:39.536735       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1115 10:36:39.536809       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 10:36:39.616742       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:36:40.255985       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:36:40.419571       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:36:40.425466       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 10:36:40.529845       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:36:40.627415       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:36:40.636605       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:36:40.751232       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.120.70"}
	I1115 10:36:40.818272       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.36.81"}
	I1115 10:36:43.136817       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:36:43.285881       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:36:43.336346       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [938d8a7a407d113893b3543524a1f018292ab3c06ad38c37347f5c09b4f19aed] <==
	I1115 10:36:42.933064       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 10:36:42.933090       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 10:36:42.933230       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 10:36:42.934314       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 10:36:42.934334       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1115 10:36:42.934360       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 10:36:42.934453       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 10:36:42.935591       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 10:36:42.935687       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1115 10:36:42.936858       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 10:36:42.937997       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 10:36:42.938073       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 10:36:42.938131       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:36:42.939145       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:36:42.940719       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 10:36:42.946916       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 10:36:42.948585       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-086099"
	I1115 10:36:42.948666       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1115 10:36:42.948826       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 10:36:42.952363       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 10:36:42.955682       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:36:42.965887       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:36:42.984036       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:36:42.984057       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:36:42.984065       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [fd0dca50b719947cba6525c14321a402534c19e3683222e6c08343aef639fcb8] <==
	I1115 10:36:40.719802       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:36:40.848650       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:36:40.949095       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:36:40.949144       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1115 10:36:40.949277       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:36:41.018176       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:36:41.018231       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:36:41.024472       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:36:41.024887       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:36:41.024910       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:36:41.026438       1 config.go:200] "Starting service config controller"
	I1115 10:36:41.026463       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:36:41.026536       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:36:41.026557       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:36:41.026578       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:36:41.026593       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:36:41.026613       1 config.go:309] "Starting node config controller"
	I1115 10:36:41.026625       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:36:41.127470       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:36:41.127488       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 10:36:41.127517       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:36:41.127528       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [38ec6363bcab12e98490c10ae199bee3e96e76c3b252eb09eaacbceb79f4fee3] <==
	I1115 10:36:37.058787       1 serving.go:386] Generated self-signed cert in-memory
	I1115 10:36:39.548992       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 10:36:39.549025       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:36:39.620478       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:36:39.620490       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1115 10:36:39.620530       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:36:39.620537       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1115 10:36:39.620590       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:36:39.620602       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:36:39.620867       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 10:36:39.620978       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:36:39.721065       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:36:39.721080       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:36:39.721114       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Nov 15 10:36:37 newest-cni-086099 kubelet[789]: E1115 10:36:37.153946     789 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-086099\" not found" node="newest-cni-086099"
	Nov 15 10:36:38 newest-cni-086099 kubelet[789]: E1115 10:36:38.155869     789 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-086099\" not found" node="newest-cni-086099"
	Nov 15 10:36:38 newest-cni-086099 kubelet[789]: E1115 10:36:38.155941     789 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-086099\" not found" node="newest-cni-086099"
	Nov 15 10:36:39 newest-cni-086099 kubelet[789]: I1115 10:36:39.514532     789 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-086099"
	Nov 15 10:36:39 newest-cni-086099 kubelet[789]: I1115 10:36:39.615147     789 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-086099"
	Nov 15 10:36:39 newest-cni-086099 kubelet[789]: I1115 10:36:39.615366     789 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-086099"
	Nov 15 10:36:39 newest-cni-086099 kubelet[789]: I1115 10:36:39.615447     789 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 15 10:36:39 newest-cni-086099 kubelet[789]: I1115 10:36:39.617054     789 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 15 10:36:39 newest-cni-086099 kubelet[789]: E1115 10:36:39.634831     789 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-086099\" already exists" pod="kube-system/kube-controller-manager-newest-cni-086099"
	Nov 15 10:36:39 newest-cni-086099 kubelet[789]: I1115 10:36:39.634880     789 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-086099"
	Nov 15 10:36:39 newest-cni-086099 kubelet[789]: E1115 10:36:39.715821     789 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-086099\" already exists" pod="kube-system/kube-scheduler-newest-cni-086099"
	Nov 15 10:36:39 newest-cni-086099 kubelet[789]: I1115 10:36:39.715868     789 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-086099"
	Nov 15 10:36:39 newest-cni-086099 kubelet[789]: E1115 10:36:39.724836     789 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-086099\" already exists" pod="kube-system/etcd-newest-cni-086099"
	Nov 15 10:36:39 newest-cni-086099 kubelet[789]: I1115 10:36:39.724879     789 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-086099"
	Nov 15 10:36:39 newest-cni-086099 kubelet[789]: E1115 10:36:39.731796     789 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-086099\" already exists" pod="kube-system/kube-apiserver-newest-cni-086099"
	Nov 15 10:36:40 newest-cni-086099 kubelet[789]: I1115 10:36:40.116668     789 apiserver.go:52] "Watching apiserver"
	Nov 15 10:36:40 newest-cni-086099 kubelet[789]: I1115 10:36:40.170342     789 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 15 10:36:40 newest-cni-086099 kubelet[789]: I1115 10:36:40.243497     789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7409c19f-472b-4074-81d0-8e43ac2bc9d4-xtables-lock\") pod \"kube-proxy-6jpzt\" (UID: \"7409c19f-472b-4074-81d0-8e43ac2bc9d4\") " pod="kube-system/kube-proxy-6jpzt"
	Nov 15 10:36:40 newest-cni-086099 kubelet[789]: I1115 10:36:40.243553     789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1b25f4e6-5f26-42ce-8ceb-56003682c785-cni-cfg\") pod \"kindnet-2h7mm\" (UID: \"1b25f4e6-5f26-42ce-8ceb-56003682c785\") " pod="kube-system/kindnet-2h7mm"
	Nov 15 10:36:40 newest-cni-086099 kubelet[789]: I1115 10:36:40.243582     789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b25f4e6-5f26-42ce-8ceb-56003682c785-lib-modules\") pod \"kindnet-2h7mm\" (UID: \"1b25f4e6-5f26-42ce-8ceb-56003682c785\") " pod="kube-system/kindnet-2h7mm"
	Nov 15 10:36:40 newest-cni-086099 kubelet[789]: I1115 10:36:40.243631     789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b25f4e6-5f26-42ce-8ceb-56003682c785-xtables-lock\") pod \"kindnet-2h7mm\" (UID: \"1b25f4e6-5f26-42ce-8ceb-56003682c785\") " pod="kube-system/kindnet-2h7mm"
	Nov 15 10:36:40 newest-cni-086099 kubelet[789]: I1115 10:36:40.243657     789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7409c19f-472b-4074-81d0-8e43ac2bc9d4-lib-modules\") pod \"kube-proxy-6jpzt\" (UID: \"7409c19f-472b-4074-81d0-8e43ac2bc9d4\") " pod="kube-system/kube-proxy-6jpzt"
	Nov 15 10:36:42 newest-cni-086099 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:36:42 newest-cni-086099 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:36:42 newest-cni-086099 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-086099 -n newest-cni-086099
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-086099 -n newest-cni-086099: exit status 2 (342.333485ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-086099 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-rblh2 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-lwxxq kubernetes-dashboard-855c9754f9-r2tgx
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-086099 describe pod coredns-66bc5c9577-rblh2 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-lwxxq kubernetes-dashboard-855c9754f9-r2tgx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-086099 describe pod coredns-66bc5c9577-rblh2 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-lwxxq kubernetes-dashboard-855c9754f9-r2tgx: exit status 1 (61.293083ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-rblh2" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-lwxxq" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-r2tgx" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-086099 describe pod coredns-66bc5c9577-rblh2 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-lwxxq kubernetes-dashboard-855c9754f9-r2tgx: exit status 1
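Note: the non-running-pod check above boils down to a single kubectl field-selector query against the profile's context. A minimal Go sketch of the same lookup follows; the nonRunningPods helper and the hard-coded context name are illustrative only, not the actual helpers_test.go code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// nonRunningPods lists pod names in all namespaces whose phase is not
// Running, using the same field selector as the post-mortem step above.
func nonRunningPods(kubeContext string) ([]string, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// Context name taken from this report; adjust for other profiles.
	pods, err := nonRunningPods("newest-cni-086099")
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	fmt.Println("non-running pods:", pods)
}

Against the cluster state captured above, this query returns the same four pods printed at helpers_test.go:280.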
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-086099
helpers_test.go:243: (dbg) docker inspect newest-cni-086099:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e6860e06d975adb47ad2079893175972a79fb3e0d03ee7ef837f4b7f29e21e14",
	        "Created": "2025-11-15T10:35:57.263723596Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 387791,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:36:29.158751083Z",
	            "FinishedAt": "2025-11-15T10:36:27.895734196Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/e6860e06d975adb47ad2079893175972a79fb3e0d03ee7ef837f4b7f29e21e14/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6860e06d975adb47ad2079893175972a79fb3e0d03ee7ef837f4b7f29e21e14/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6860e06d975adb47ad2079893175972a79fb3e0d03ee7ef837f4b7f29e21e14/hosts",
	        "LogPath": "/var/lib/docker/containers/e6860e06d975adb47ad2079893175972a79fb3e0d03ee7ef837f4b7f29e21e14/e6860e06d975adb47ad2079893175972a79fb3e0d03ee7ef837f4b7f29e21e14-json.log",
	        "Name": "/newest-cni-086099",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-086099:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-086099",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e6860e06d975adb47ad2079893175972a79fb3e0d03ee7ef837f4b7f29e21e14",
	                "LowerDir": "/var/lib/docker/overlay2/9482d5d3fc87916f5559bace41af8ac4799d27e44564678531b92a41e1b27eaf-init/diff:/var/lib/docker/overlay2/507c85b6fb43d9c98216fac79d7a8d08dd20b2a63d0fbfc758336e5d38c04044/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9482d5d3fc87916f5559bace41af8ac4799d27e44564678531b92a41e1b27eaf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9482d5d3fc87916f5559bace41af8ac4799d27e44564678531b92a41e1b27eaf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9482d5d3fc87916f5559bace41af8ac4799d27e44564678531b92a41e1b27eaf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-086099",
	                "Source": "/var/lib/docker/volumes/newest-cni-086099/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-086099",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-086099",
	                "name.minikube.sigs.k8s.io": "newest-cni-086099",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b9413fb79e83e8ccdf506b9c4daaf44e59c92c01504bb3ca5c4abfd806186f2b",
	            "SandboxKey": "/var/run/docker/netns/b9413fb79e83",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-086099": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "09708c4610e17a8aeca1147b11bbc4d170ab97359e0b99b5bd4de917c0e4fd72",
	                    "EndpointID": "17eb7e5e13e076ef0d5938b085776d0625af70ec325f7e005124647b424970fd",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "3e:2e:54:d1:1d:87",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-086099",
	                        "e6860e06d975"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
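Note: the NetworkSettings.Ports map in the inspect output above is how the host reaches the cluster; 8443/tcp (the apiserver) is published on 127.0.0.1:33132. A minimal Go sketch that pulls that mapping out of docker inspect via its Go-template --format flag follows; it is illustrative only, not minikube's own code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiServerHostPort asks Docker for the host port bound to the container's
// 8443/tcp, i.e. the published apiserver endpoint shown in the inspect
// output above.
func apiServerHostPort(container string) (string, error) {
	tmpl := `{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}`
	out, err := exec.Command("docker", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := apiServerHostPort("newest-cni-086099")
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	// With the state captured above this prints 33132, so the apiserver is
	// reachable from the host at https://127.0.0.1:33132.
	fmt.Println("8443/tcp ->", port)
}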
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-086099 -n newest-cni-086099
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-086099 -n newest-cni-086099: exit status 2 (345.304116ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-086099 logs -n 25
E1115 10:36:46.379888   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kindnet-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-086099 logs -n 25: (1.003422829s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p no-preload-283677 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ start   │ -p no-preload-283677 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable metrics-server -p embed-certs-719574 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ stop    │ -p embed-certs-719574 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ image   │ old-k8s-version-087235 image list --format=json                                                                                                                                                                                               │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ pause   │ -p old-k8s-version-087235 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ delete  │ -p old-k8s-version-087235                                                                                                                                                                                                                     │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-719574 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p embed-certs-719574 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p old-k8s-version-087235                                                                                                                                                                                                                     │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p newest-cni-086099 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:36 UTC │
	│ image   │ no-preload-283677 image list --format=json                                                                                                                                                                                                    │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ pause   │ -p no-preload-283677 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ delete  │ -p no-preload-283677                                                                                                                                                                                                                          │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p no-preload-283677                                                                                                                                                                                                                          │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-026691 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-026691 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p newest-cni-086099 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ stop    │ -p newest-cni-086099 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-086099 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ start   │ -p newest-cni-086099 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-026691 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ start   │ -p default-k8s-diff-port-026691 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ image   │ newest-cni-086099 image list --format=json                                                                                                                                                                                                    │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ pause   │ -p newest-cni-086099 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:36:31
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:36:31.193182  388420 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:36:31.193281  388420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:36:31.193289  388420 out.go:374] Setting ErrFile to fd 2...
	I1115 10:36:31.193293  388420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:36:31.193515  388420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 10:36:31.193933  388420 out.go:368] Setting JSON to false
	I1115 10:36:31.195111  388420 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8328,"bootTime":1763194663,"procs":266,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:36:31.195216  388420 start.go:143] virtualization: kvm guest
	I1115 10:36:31.196894  388420 out.go:179] * [default-k8s-diff-port-026691] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:36:31.198076  388420 notify.go:221] Checking for updates...
	I1115 10:36:31.198087  388420 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 10:36:31.199249  388420 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:36:31.200471  388420 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:36:31.201512  388420 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube
	I1115 10:36:31.202449  388420 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:36:31.203634  388420 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:36:31.205205  388420 config.go:182] Loaded profile config "default-k8s-diff-port-026691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:31.205718  388420 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:36:31.228892  388420 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:36:31.229044  388420 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:36:31.285898  388420 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:67 SystemTime:2025-11-15 10:36:31.276283811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:36:31.286032  388420 docker.go:319] overlay module found
	I1115 10:36:31.287655  388420 out.go:179] * Using the docker driver based on existing profile
	I1115 10:36:31.288859  388420 start.go:309] selected driver: docker
	I1115 10:36:31.288877  388420 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-026691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:31.288972  388420 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:36:31.289812  388420 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:36:31.352009  388420 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:66 SystemTime:2025-11-15 10:36:31.342104199 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:36:31.352371  388420 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:36:31.352408  388420 cni.go:84] Creating CNI manager for ""
	I1115 10:36:31.352457  388420 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:36:31.352498  388420 start.go:353] cluster config:
	{Name:default-k8s-diff-port-026691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:31.354418  388420 out.go:179] * Starting "default-k8s-diff-port-026691" primary control-plane node in "default-k8s-diff-port-026691" cluster
	I1115 10:36:31.355595  388420 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:36:31.356825  388420 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:36:31.357856  388420 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:36:31.357890  388420 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 10:36:31.357905  388420 cache.go:65] Caching tarball of preloaded images
	I1115 10:36:31.357944  388420 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:36:31.358020  388420 preload.go:238] Found /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 10:36:31.358036  388420 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:36:31.358136  388420 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/config.json ...
	I1115 10:36:31.378843  388420 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:36:31.378864  388420 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:36:31.378881  388420 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:36:31.378904  388420 start.go:360] acquireMachinesLock for default-k8s-diff-port-026691: {Name:mk1f3196dd9a24a043fa707553211d0b0ea8c1f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:36:31.378986  388420 start.go:364] duration metric: took 61.257µs to acquireMachinesLock for "default-k8s-diff-port-026691"
	I1115 10:36:31.379010  388420 start.go:96] Skipping create...Using existing machine configuration
	I1115 10:36:31.379018  388420 fix.go:54] fixHost starting: 
	I1115 10:36:31.379252  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:31.397025  388420 fix.go:112] recreateIfNeeded on default-k8s-diff-port-026691: state=Stopped err=<nil>
	W1115 10:36:31.397068  388420 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 10:36:29.135135  387591 out.go:252] * Restarting existing docker container for "newest-cni-086099" ...
	I1115 10:36:29.135222  387591 cli_runner.go:164] Run: docker start newest-cni-086099
	I1115 10:36:29.412428  387591 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:29.431258  387591 kic.go:430] container "newest-cni-086099" state is running.
	I1115 10:36:29.431760  387591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-086099
	I1115 10:36:29.450271  387591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/config.json ...
	I1115 10:36:29.450487  387591 machine.go:94] provisionDockerMachine start ...
	I1115 10:36:29.450542  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:29.468796  387591 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:29.469141  387591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1115 10:36:29.469158  387591 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:36:29.469768  387591 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43374->127.0.0.1:33129: read: connection reset by peer
	I1115 10:36:32.597021  387591 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-086099
	
	I1115 10:36:32.597063  387591 ubuntu.go:182] provisioning hostname "newest-cni-086099"
	I1115 10:36:32.597140  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:32.616934  387591 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:32.617209  387591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1115 10:36:32.617233  387591 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-086099 && echo "newest-cni-086099" | sudo tee /etc/hostname
	I1115 10:36:32.756237  387591 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-086099
	
	I1115 10:36:32.756329  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:32.775168  387591 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:32.775389  387591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1115 10:36:32.775405  387591 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-086099' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-086099/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-086099' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:36:32.902668  387591 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:36:32.902701  387591 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-55448/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-55448/.minikube}
	I1115 10:36:32.902736  387591 ubuntu.go:190] setting up certificates
	I1115 10:36:32.902754  387591 provision.go:84] configureAuth start
	I1115 10:36:32.902811  387591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-086099
	I1115 10:36:32.921923  387591 provision.go:143] copyHostCerts
	I1115 10:36:32.922017  387591 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem, removing ...
	I1115 10:36:32.922035  387591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem
	I1115 10:36:32.922102  387591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem (1082 bytes)
	I1115 10:36:32.922216  387591 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem, removing ...
	I1115 10:36:32.922225  387591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem
	I1115 10:36:32.922253  387591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem (1123 bytes)
	I1115 10:36:32.922341  387591 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem, removing ...
	I1115 10:36:32.922348  387591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem
	I1115 10:36:32.922372  387591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem (1679 bytes)
	I1115 10:36:32.922421  387591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem org=jenkins.newest-cni-086099 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-086099]
	I1115 10:36:32.940854  387591 provision.go:177] copyRemoteCerts
	I1115 10:36:32.940914  387591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:36:32.940948  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:32.958931  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:33.053731  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:36:33.071243  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:36:33.088651  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 10:36:33.105219  387591 provision.go:87] duration metric: took 202.453369ms to configureAuth
	I1115 10:36:33.105244  387591 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:36:33.105414  387591 config.go:182] Loaded profile config "newest-cni-086099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:33.105509  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:33.123012  387591 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:33.123259  387591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1115 10:36:33.123277  387591 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:36:33.389799  387591 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:36:33.389822  387591 machine.go:97] duration metric: took 3.93932207s to provisionDockerMachine
	I1115 10:36:33.389835  387591 start.go:293] postStartSetup for "newest-cni-086099" (driver="docker")
	I1115 10:36:33.389844  387591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:36:33.389903  387591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:36:33.389946  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:33.409403  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:33.503330  387591 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:36:33.506790  387591 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:36:33.506815  387591 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:36:33.506825  387591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/addons for local assets ...
	I1115 10:36:33.506878  387591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/files for local assets ...
	I1115 10:36:33.506995  387591 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem -> 589622.pem in /etc/ssl/certs
	I1115 10:36:33.507126  387591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:36:33.514570  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:33.531880  387591 start.go:296] duration metric: took 142.028023ms for postStartSetup
	I1115 10:36:33.532012  387591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:36:33.532066  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:33.549908  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:33.640348  387591 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:36:33.645124  387591 fix.go:56] duration metric: took 4.529931109s for fixHost
	I1115 10:36:33.645164  387591 start.go:83] releasing machines lock for "newest-cni-086099", held for 4.529982501s
	I1115 10:36:33.645246  387591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-086099
	I1115 10:36:33.663364  387591 ssh_runner.go:195] Run: cat /version.json
	I1115 10:36:33.663400  387591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:36:33.663445  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:33.663461  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:33.682200  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:33.682521  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:33.827221  387591 ssh_runner.go:195] Run: systemctl --version
	I1115 10:36:33.834019  387591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:36:33.868151  387591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:36:33.872995  387591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:36:33.873067  387591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:36:33.881540  387591 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:36:33.881563  387591 start.go:496] detecting cgroup driver to use...
	I1115 10:36:33.881595  387591 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:36:33.881628  387591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:36:33.895704  387591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:36:33.907633  387591 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:36:33.907681  387591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:36:33.921408  387591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	W1115 10:36:30.745845  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	W1115 10:36:32.746544  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	I1115 10:36:33.933689  387591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:36:34.015025  387591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:36:34.097166  387591 docker.go:234] disabling docker service ...
	I1115 10:36:34.097250  387591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:36:34.111501  387591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:36:34.123898  387591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:36:34.208076  387591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:36:34.289077  387591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:36:34.302010  387591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:36:34.316333  387591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:36:34.316409  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.325113  387591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:36:34.325175  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.333844  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.342343  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.350817  387591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:36:34.359269  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.368008  387591 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.376100  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.384822  387591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:36:34.392091  387591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:36:34.399149  387591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:34.478616  387591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:36:34.580323  387591 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:36:34.580408  387591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:36:34.584509  387591 start.go:564] Will wait 60s for crictl version
	I1115 10:36:34.584568  387591 ssh_runner.go:195] Run: which crictl
	I1115 10:36:34.588078  387591 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:36:34.613070  387591 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:36:34.613150  387591 ssh_runner.go:195] Run: crio --version
	I1115 10:36:34.641080  387591 ssh_runner.go:195] Run: crio --version
	I1115 10:36:34.670335  387591 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:36:34.671690  387591 cli_runner.go:164] Run: docker network inspect newest-cni-086099 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:36:34.689678  387591 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1115 10:36:34.693973  387591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:34.705342  387591 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1115 10:36:31.398937  388420 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-026691" ...
	I1115 10:36:31.399016  388420 cli_runner.go:164] Run: docker start default-k8s-diff-port-026691
	I1115 10:36:31.676189  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:31.694382  388420 kic.go:430] container "default-k8s-diff-port-026691" state is running.
	I1115 10:36:31.694751  388420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-026691
	I1115 10:36:31.713425  388420 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/config.json ...
	I1115 10:36:31.713652  388420 machine.go:94] provisionDockerMachine start ...
	I1115 10:36:31.713746  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:31.732991  388420 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:31.733252  388420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1115 10:36:31.733277  388420 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:36:31.734038  388420 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45950->127.0.0.1:33134: read: connection reset by peer
	I1115 10:36:34.867843  388420 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-026691
	
	I1115 10:36:34.867883  388420 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-026691"
	I1115 10:36:34.868072  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:34.887800  388420 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:34.888079  388420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1115 10:36:34.888098  388420 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-026691 && echo "default-k8s-diff-port-026691" | sudo tee /etc/hostname
	I1115 10:36:35.027312  388420 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-026691
	
	I1115 10:36:35.027402  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:35.049307  388420 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:35.049620  388420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1115 10:36:35.049653  388420 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-026691' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-026691/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-026691' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:36:35.185792  388420 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:36:35.185824  388420 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-55448/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-55448/.minikube}
	I1115 10:36:35.185877  388420 ubuntu.go:190] setting up certificates
	I1115 10:36:35.185889  388420 provision.go:84] configureAuth start
	I1115 10:36:35.185975  388420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-026691
	I1115 10:36:35.205215  388420 provision.go:143] copyHostCerts
	I1115 10:36:35.205302  388420 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem, removing ...
	I1115 10:36:35.205325  388420 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem
	I1115 10:36:35.205419  388420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem (1082 bytes)
	I1115 10:36:35.205578  388420 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem, removing ...
	I1115 10:36:35.205600  388420 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem
	I1115 10:36:35.205648  388420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem (1123 bytes)
	I1115 10:36:35.205811  388420 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem, removing ...
	I1115 10:36:35.205831  388420 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem
	I1115 10:36:35.205877  388420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem (1679 bytes)
	I1115 10:36:35.205988  388420 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-026691 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-026691 localhost minikube]
	I1115 10:36:35.356382  388420 provision.go:177] copyRemoteCerts
	I1115 10:36:35.356441  388420 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:36:35.356476  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:35.375752  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:35.470476  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:36:35.488150  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1115 10:36:35.505264  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:36:35.522854  388420 provision.go:87] duration metric: took 336.947608ms to configureAuth
	I1115 10:36:35.522880  388420 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:36:35.523120  388420 config.go:182] Loaded profile config "default-k8s-diff-port-026691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:35.523282  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:35.543167  388420 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:35.543480  388420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1115 10:36:35.543509  388420 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:36:35.848476  388420 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:36:35.848509  388420 machine.go:97] duration metric: took 4.134839636s to provisionDockerMachine
	I1115 10:36:35.848525  388420 start.go:293] postStartSetup for "default-k8s-diff-port-026691" (driver="docker")
	I1115 10:36:35.848541  388420 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:36:35.848616  388420 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:36:35.848671  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:35.868537  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:35.963605  388420 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:36:35.967175  388420 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:36:35.967199  388420 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:36:35.967209  388420 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/addons for local assets ...
	I1115 10:36:35.967263  388420 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/files for local assets ...
	I1115 10:36:35.967339  388420 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem -> 589622.pem in /etc/ssl/certs
	I1115 10:36:35.967422  388420 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:36:35.975404  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:35.992754  388420 start.go:296] duration metric: took 144.211835ms for postStartSetup
	I1115 10:36:35.992851  388420 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:36:35.992902  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:36.010853  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:36.106652  388420 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:36:36.111301  388420 fix.go:56] duration metric: took 4.732276816s for fixHost
	I1115 10:36:36.111327  388420 start.go:83] releasing machines lock for "default-k8s-diff-port-026691", held for 4.732326241s
	I1115 10:36:36.111401  388420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-026691
	I1115 10:36:36.133087  388420 ssh_runner.go:195] Run: cat /version.json
	I1115 10:36:36.133147  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:36.133224  388420 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:36:36.133295  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:36.161597  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:36.162169  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:34.706341  387591 kubeadm.go:884] updating cluster {Name:newest-cni-086099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:36:34.706463  387591 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:36:34.706520  387591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:34.737832  387591 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:34.737871  387591 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:36:34.737929  387591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:34.765628  387591 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:34.765650  387591 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:36:34.765657  387591 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1115 10:36:34.765750  387591 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-086099 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:36:34.765813  387591 ssh_runner.go:195] Run: crio config
	I1115 10:36:34.812764  387591 cni.go:84] Creating CNI manager for ""
	I1115 10:36:34.812787  387591 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:36:34.812806  387591 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1115 10:36:34.812836  387591 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-086099 NodeName:newest-cni-086099 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:36:34.813018  387591 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-086099"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:36:34.813097  387591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:36:34.821514  387591 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:36:34.821582  387591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:36:34.829425  387591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1115 10:36:34.841803  387591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:36:34.854099  387591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1115 10:36:34.867123  387591 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:36:34.871300  387591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:34.882157  387591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:34.965624  387591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:36:34.991396  387591 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099 for IP: 192.168.103.2
	I1115 10:36:34.991421  387591 certs.go:195] generating shared ca certs ...
	I1115 10:36:34.991442  387591 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:34.991611  387591 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 10:36:34.991670  387591 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 10:36:34.991685  387591 certs.go:257] generating profile certs ...
	I1115 10:36:34.991800  387591 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/client.key
	I1115 10:36:34.991881  387591 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key.d719cdad
	I1115 10:36:34.991938  387591 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.key
	I1115 10:36:34.992114  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:36:34.992160  387591 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:36:34.992182  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:36:34.992223  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:36:34.992266  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:36:34.992298  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:36:34.992360  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:34.993060  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:36:35.012346  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:36:35.032525  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:36:35.052616  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:36:35.116969  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 10:36:35.141400  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:36:35.160318  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:36:35.178367  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:36:35.231343  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:36:35.251073  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:36:35.269574  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:36:35.287839  387591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:36:35.300609  387591 ssh_runner.go:195] Run: openssl version
	I1115 10:36:35.306757  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:36:35.315111  387591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:36:35.318673  387591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:36:35.318726  387591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:36:35.352595  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:36:35.360661  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:36:35.369044  387591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:36:35.373102  387591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:36:35.373149  387591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:36:35.407763  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:36:35.416805  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:36:35.426105  387591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:35.429879  387591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:35.429928  387591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:35.464376  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:36:35.472689  387591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:36:35.476537  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:36:35.513422  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:36:35.552107  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:36:35.627892  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:36:35.738207  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:36:35.927631  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
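	As context for the repeated "openssl x509 -noout -in <cert> -checkend 86400" runs above: that check asks whether a certificate's NotAfter timestamp falls within the next 24 hours (86400 seconds), and openssl exits 0 if the certificate stays valid past the window and 1 if it does not. A minimal Go sketch of the same expiry check, assuming crypto/x509 and an example certificate path (illustrative only, not minikube's implementation):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires
	// inside the given window, mirroring "openssl x509 -checkend".
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// Expiring if NotAfter falls before now + window.
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		// Example path only; the log above checks several control-plane certs.
		expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(2)
		}
		if expiring {
			fmt.Println("certificate will expire within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least another 24h")
	}
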
	I1115 10:36:36.020791  387591 kubeadm.go:401] StartCluster: {Name:newest-cni-086099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:36.020915  387591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:36:36.020993  387591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:36:36.054712  387591 cri.go:89] found id: "38ec6363bcab12e98490c10ae199bee3e96e76c3b252eb09eaacbceb79f4fee3"
	I1115 10:36:36.054741  387591 cri.go:89] found id: "dcddb7cd9963badc91f7602efc0c7dd3a6aa66928df5288774cb992a0f211b2c"
	I1115 10:36:36.054748  387591 cri.go:89] found id: "938d8a7a407d113893b3543524a1f018292ab3c06ad38c37347f5c09b4f19aed"
	I1115 10:36:36.054753  387591 cri.go:89] found id: "6799daac297c17fbe94a59a4b23a00cc23bdfe7671f3f31f803b074be4ef25a5"
	I1115 10:36:36.054758  387591 cri.go:89] found id: ""
	I1115 10:36:36.054810  387591 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:36:36.122342  387591 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:36Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:36:36.122434  387591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:36:36.132788  387591 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:36:36.132807  387591 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:36:36.132853  387591 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:36:36.144175  387591 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:36:36.145209  387591 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-086099" does not appear in /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:36:36.145870  387591 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-55448/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-086099" cluster setting kubeconfig missing "newest-cni-086099" context setting]
	I1115 10:36:36.146847  387591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:36.149871  387591 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:36:36.217177  387591 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1115 10:36:36.217217  387591 kubeadm.go:602] duration metric: took 84.40299ms to restartPrimaryControlPlane
	I1115 10:36:36.217231  387591 kubeadm.go:403] duration metric: took 196.454161ms to StartCluster
	I1115 10:36:36.217253  387591 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:36.217343  387591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:36:36.218632  387591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:36.218872  387591 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:36:36.218972  387591 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:36:36.219074  387591 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-086099"
	I1115 10:36:36.219094  387591 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-086099"
	W1115 10:36:36.219105  387591 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:36:36.219138  387591 host.go:66] Checking if "newest-cni-086099" exists ...
	I1115 10:36:36.219158  387591 addons.go:70] Setting dashboard=true in profile "newest-cni-086099"
	I1115 10:36:36.219163  387591 config.go:182] Loaded profile config "newest-cni-086099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:36.219193  387591 addons.go:239] Setting addon dashboard=true in "newest-cni-086099"
	W1115 10:36:36.219202  387591 addons.go:248] addon dashboard should already be in state true
	I1115 10:36:36.219217  387591 addons.go:70] Setting default-storageclass=true in profile "newest-cni-086099"
	I1115 10:36:36.219235  387591 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-086099"
	I1115 10:36:36.219248  387591 host.go:66] Checking if "newest-cni-086099" exists ...
	I1115 10:36:36.219557  387591 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:36.219712  387591 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:36.219712  387591 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:36.220680  387591 out.go:179] * Verifying Kubernetes components...
	I1115 10:36:36.221665  387591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:36.248161  387591 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:36:36.248172  387591 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 10:36:36.249608  387591 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:36:36.249628  387591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:36:36.249683  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:36.249733  387591 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 10:36:36.324481  388420 ssh_runner.go:195] Run: systemctl --version
	I1115 10:36:36.336623  388420 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:36:36.372576  388420 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:36:36.377572  388420 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:36:36.377633  388420 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:36:36.385687  388420 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:36:36.385710  388420 start.go:496] detecting cgroup driver to use...
	I1115 10:36:36.385740  388420 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:36:36.385776  388420 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:36:36.399728  388420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:36:36.411622  388420 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:36:36.411694  388420 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:36:36.431786  388420 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:36:36.449270  388420 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:36:36.538378  388420 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:36:36.622459  388420 docker.go:234] disabling docker service ...
	I1115 10:36:36.622563  388420 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:36:36.644022  388420 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:36:36.656349  388420 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:36:36.757453  388420 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:36:36.851752  388420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:36:36.864024  388420 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:36:36.878189  388420 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:36:36.878243  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.886869  388420 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:36:36.886944  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.895649  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.904129  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.912660  388420 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:36:36.922601  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.934730  388420 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.945527  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.955227  388420 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:36:36.962702  388420 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:36:36.969927  388420 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:37.064102  388420 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:36:37.181392  388420 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:36:37.181469  388420 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:36:37.185705  388420 start.go:564] Will wait 60s for crictl version
	I1115 10:36:37.185759  388420 ssh_runner.go:195] Run: which crictl
	I1115 10:36:37.189374  388420 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:36:37.214797  388420 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:36:37.214872  388420 ssh_runner.go:195] Run: crio --version
	I1115 10:36:37.247024  388420 ssh_runner.go:195] Run: crio --version
	I1115 10:36:37.283127  388420 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1115 10:36:35.246243  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	I1115 10:36:37.246256  377744 pod_ready.go:94] pod "coredns-66bc5c9577-fjzk5" is "Ready"
	I1115 10:36:37.246283  377744 pod_ready.go:86] duration metric: took 33.505674032s for pod "coredns-66bc5c9577-fjzk5" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.248931  377744 pod_ready.go:83] waiting for pod "etcd-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.253449  377744 pod_ready.go:94] pod "etcd-embed-certs-719574" is "Ready"
	I1115 10:36:37.253477  377744 pod_ready.go:86] duration metric: took 4.523106ms for pod "etcd-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.258749  377744 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.262996  377744 pod_ready.go:94] pod "kube-apiserver-embed-certs-719574" is "Ready"
	I1115 10:36:37.263019  377744 pod_ready.go:86] duration metric: took 4.2473ms for pod "kube-apiserver-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.265400  377744 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.444138  377744 pod_ready.go:94] pod "kube-controller-manager-embed-certs-719574" is "Ready"
	I1115 10:36:37.444168  377744 pod_ready.go:86] duration metric: took 178.743562ms for pod "kube-controller-manager-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.644722  377744 pod_ready.go:83] waiting for pod "kube-proxy-kmc8c" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:38.044247  377744 pod_ready.go:94] pod "kube-proxy-kmc8c" is "Ready"
	I1115 10:36:38.044277  377744 pod_ready.go:86] duration metric: took 399.527336ms for pod "kube-proxy-kmc8c" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:38.245350  377744 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:38.644894  377744 pod_ready.go:94] pod "kube-scheduler-embed-certs-719574" is "Ready"
	I1115 10:36:38.645014  377744 pod_ready.go:86] duration metric: took 399.62796ms for pod "kube-scheduler-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:38.645030  377744 pod_ready.go:40] duration metric: took 34.90782271s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:36:38.702511  377744 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:36:38.706562  377744 out.go:179] * Done! kubectl is now configured to use "embed-certs-719574" cluster and "default" namespace by default
	I1115 10:36:37.284492  388420 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-026691 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:36:37.302095  388420 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1115 10:36:37.306321  388420 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:37.316768  388420 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-026691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:36:37.316911  388420 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:36:37.316980  388420 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:37.354039  388420 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:37.354063  388420 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:36:37.354121  388420 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:37.384223  388420 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:37.384249  388420 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:36:37.384257  388420 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1115 10:36:37.384353  388420 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-026691 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:36:37.384416  388420 ssh_runner.go:195] Run: crio config
	I1115 10:36:37.429588  388420 cni.go:84] Creating CNI manager for ""
	I1115 10:36:37.429616  388420 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:36:37.429637  388420 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:36:37.429663  388420 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-026691 NodeName:default-k8s-diff-port-026691 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:36:37.429840  388420 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-026691"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:36:37.429922  388420 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:36:37.438488  388420 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:36:37.438583  388420 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:36:37.446984  388420 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1115 10:36:37.459608  388420 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:36:37.472652  388420 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1115 10:36:37.484924  388420 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:36:37.488541  388420 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:37.498126  388420 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:37.587175  388420 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:36:37.609456  388420 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691 for IP: 192.168.85.2
	I1115 10:36:37.609480  388420 certs.go:195] generating shared ca certs ...
	I1115 10:36:37.609501  388420 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:37.609671  388420 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 10:36:37.609735  388420 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 10:36:37.609750  388420 certs.go:257] generating profile certs ...
	I1115 10:36:37.609859  388420 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/client.key
	I1115 10:36:37.609921  388420 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.key.f8824eec
	I1115 10:36:37.610007  388420 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.key
	I1115 10:36:37.610146  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:36:37.610198  388420 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:36:37.610212  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:36:37.610244  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:36:37.610278  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:36:37.610306  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:36:37.610359  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:37.611122  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:36:37.629925  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:36:37.650833  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:36:37.671862  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:36:37.696427  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1115 10:36:37.763348  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:36:37.782654  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:36:37.800720  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:36:37.817628  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:36:37.835327  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:36:37.856769  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:36:37.876039  388420 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:36:37.891255  388420 ssh_runner.go:195] Run: openssl version
	I1115 10:36:37.898994  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:36:37.907571  388420 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:36:37.912280  388420 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:36:37.912337  388420 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:36:37.950692  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:36:37.959456  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:36:37.968450  388420 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:36:37.972465  388420 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:36:37.972521  388420 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:36:38.008129  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:36:38.016745  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:36:38.027414  388420 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:38.031718  388420 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:38.031792  388420 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:38.077405  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:36:38.086004  388420 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:36:38.089990  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:36:38.127939  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:36:38.181791  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:36:38.256153  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:36:38.368577  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:36:38.543333  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1115 10:36:38.645754  388420 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-026691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:38.645863  388420 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:36:38.645935  388420 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:36:38.685210  388420 cri.go:89] found id: "b04411b3a023360282fda35601339f2000829b38e465e5c4c130b7c58e111bb3"
	I1115 10:36:38.685237  388420 cri.go:89] found id: "971b8e4c2073b59c0dd66594f19cfc1d0b9f81cb1bad24b21946d5ea7b012768"
	I1115 10:36:38.685254  388420 cri.go:89] found id: "6a1db649ea51d34c5661db65312d1c8660828376983f865ce0b3d2801b219c2d"
	I1115 10:36:38.685259  388420 cri.go:89] found id: "58595dd2cf4ce1cb8f740a6594f3ee7a3c7d2587682b4e6c526266c0067303a7"
	I1115 10:36:38.685262  388420 cri.go:89] found id: ""
	I1115 10:36:38.685312  388420 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:36:38.750674  388420 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:38Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:36:38.750744  388420 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:36:38.769157  388420 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:36:38.769186  388420 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:36:38.769238  388420 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:36:38.842499  388420 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:36:38.845337  388420 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-026691" does not appear in /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:36:38.846840  388420 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-55448/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-026691" cluster setting kubeconfig missing "default-k8s-diff-port-026691" context setting]
	I1115 10:36:38.849516  388420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:38.855210  388420 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:36:38.870026  388420 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1115 10:36:38.870059  388420 kubeadm.go:602] duration metric: took 100.86647ms to restartPrimaryControlPlane
	I1115 10:36:38.870073  388420 kubeadm.go:403] duration metric: took 224.328768ms to StartCluster
	I1115 10:36:38.870094  388420 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:38.870172  388420 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:36:38.872536  388420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:38.872812  388420 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:36:38.873059  388420 config.go:182] Loaded profile config "default-k8s-diff-port-026691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:38.873024  388420 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:36:38.873181  388420 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-026691"
	I1115 10:36:38.873220  388420 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-026691"
	W1115 10:36:38.873240  388420 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:36:38.873315  388420 host.go:66] Checking if "default-k8s-diff-port-026691" exists ...
	I1115 10:36:38.873258  388420 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-026691"
	I1115 10:36:38.873640  388420 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-026691"
	W1115 10:36:38.873663  388420 addons.go:248] addon dashboard should already be in state true
	I1115 10:36:38.873444  388420 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-026691"
	I1115 10:36:38.873728  388420 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-026691"
	I1115 10:36:38.873753  388420 host.go:66] Checking if "default-k8s-diff-port-026691" exists ...
	I1115 10:36:38.874091  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:38.874589  388420 out.go:179] * Verifying Kubernetes components...
	I1115 10:36:38.874818  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:38.875168  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:38.876706  388420 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:38.907308  388420 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:36:38.907363  388420 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-026691"
	W1115 10:36:38.907464  388420 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:36:38.907503  388420 host.go:66] Checking if "default-k8s-diff-port-026691" exists ...
	I1115 10:36:38.908043  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:38.912208  388420 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:36:38.912236  388420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:36:38.912295  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:38.915346  388420 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 10:36:38.916793  388420 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 10:36:36.250323  387591 addons.go:239] Setting addon default-storageclass=true in "newest-cni-086099"
	W1115 10:36:36.250350  387591 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:36:36.250389  387591 host.go:66] Checking if "newest-cni-086099" exists ...
	I1115 10:36:36.251476  387591 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:36.255103  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 10:36:36.255128  387591 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 10:36:36.255190  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:36.278537  387591 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:36:36.278565  387591 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:36:36.278644  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:36.280814  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:36.281721  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:36.296440  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:36.630526  387591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:36:36.633566  387591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:36:36.636633  387591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:36:36.638099  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 10:36:36.638116  387591 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 10:36:36.724472  387591 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:36:36.724559  387591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:36:36.729948  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 10:36:36.730015  387591 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 10:36:36.826253  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 10:36:36.826282  387591 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 10:36:36.843537  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 10:36:36.843560  387591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 10:36:36.931895  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 10:36:36.931924  387591 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 10:36:36.945766  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 10:36:36.945791  387591 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 10:36:37.023562  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 10:36:37.023593  387591 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 10:36:37.038918  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 10:36:37.038944  387591 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 10:36:37.052909  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:36:37.052937  387591 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 10:36:37.119950  387591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:36:39.816288  387591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.182684264s)
	I1115 10:36:40.959315  387591 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.234727667s)
	I1115 10:36:40.959363  387591 api_server.go:72] duration metric: took 4.740464162s to wait for apiserver process to appear ...
	I1115 10:36:40.959371  387591 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:36:40.959395  387591 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1115 10:36:40.959325  387591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.322653976s)
	I1115 10:36:40.959440  387591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.839423734s)
	I1115 10:36:40.962518  387591 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-086099 addons enable metrics-server
	
	I1115 10:36:40.964092  387591 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
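	For orientation on the healthz wait recorded above and below (api_server.go polls https://192.168.103.2:8443/healthz and, further down, receives a 500 while the rbac/bootstrap-roles poststart hook is still pending), a rough Go sketch of such a readiness probe follows. The endpoint, timeout, and polling interval are assumptions for illustration; this is not minikube's actual code:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver's serving cert is not trusted by the probing host here,
			// so certificate verification is skipped for this liveness-style probe only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.103.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy:", string(body))
					return
				}
				// A 500 with "[-]poststarthook/... failed" in the body means
				// the apiserver is up but not yet fully initialized.
				fmt.Printf("apiserver not ready yet (HTTP %d)\n", resp.StatusCode)
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for apiserver /healthz")
	}
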
	I1115 10:36:38.917819  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 10:36:38.917851  388420 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 10:36:38.917924  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:38.930932  388420 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:36:38.930982  388420 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:36:38.931053  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:38.933702  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:38.939670  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:38.960258  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:39.257807  388420 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:36:39.264707  388420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:36:39.270235  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 10:36:39.270261  388420 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 10:36:39.274532  388420 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-026691" to be "Ready" ...
	I1115 10:36:39.351682  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 10:36:39.351725  388420 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 10:36:39.357989  388420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:36:39.374984  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 10:36:39.375011  388420 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 10:36:39.457352  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 10:36:39.457377  388420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 10:36:39.542591  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 10:36:39.542618  388420 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 10:36:39.565925  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 10:36:39.566041  388420 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 10:36:39.580123  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 10:36:39.580242  388420 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 10:36:39.655102  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 10:36:39.655149  388420 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 10:36:39.669218  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:36:39.669246  388420 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 10:36:39.683183  388420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:36:40.965416  387591 addons.go:515] duration metric: took 4.746465999s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1115 10:36:40.965454  387591 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:36:40.965477  387591 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:36:41.460167  387591 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1115 10:36:41.465475  387591 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1115 10:36:41.466642  387591 api_server.go:141] control plane version: v1.34.1
	I1115 10:36:41.466668  387591 api_server.go:131] duration metric: took 507.289044ms to wait for apiserver health ...
	I1115 10:36:41.466679  387591 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:36:41.470116  387591 system_pods.go:59] 8 kube-system pods found
	I1115 10:36:41.470165  387591 system_pods.go:61] "coredns-66bc5c9577-rblh2" [903029e0-3b15-43f3-836a-884de528cbc9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 10:36:41.470180  387591 system_pods.go:61] "etcd-newest-cni-086099" [6768a007-08a6-47b0-9917-cf54f577829b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:36:41.470190  387591 system_pods.go:61] "kindnet-2h7mm" [1b25f4e6-5f26-42ce-8ceb-56003682c785] Running
	I1115 10:36:41.470200  387591 system_pods.go:61] "kube-apiserver-newest-cni-086099" [3ca22829-f679-44bf-94e5-e4a368e13dcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:36:41.470210  387591 system_pods.go:61] "kube-controller-manager-newest-cni-086099" [1f45f32a-2d9e-49c0-9c69-d2aa59324564] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:36:41.470219  387591 system_pods.go:61] "kube-proxy-6jpzt" [7409c19f-472b-4074-81d0-8e43ac2bc9d4] Running
	I1115 10:36:41.470226  387591 system_pods.go:61] "kube-scheduler-newest-cni-086099" [c3510e0f-9b51-4fb5-bc6e-d0e47be8f5ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:36:41.470235  387591 system_pods.go:61] "storage-provisioner" [23166a3f-bb02-48ca-ab00-721c8c46525d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 10:36:41.470247  387591 system_pods.go:74] duration metric: took 3.560608ms to wait for pod list to return data ...
	I1115 10:36:41.470262  387591 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:36:41.472726  387591 default_sa.go:45] found service account: "default"
	I1115 10:36:41.472751  387591 default_sa.go:55] duration metric: took 2.478273ms for default service account to be created ...
	I1115 10:36:41.472765  387591 kubeadm.go:587] duration metric: took 5.253867745s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1115 10:36:41.472786  387591 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:36:41.475250  387591 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:36:41.475273  387591 node_conditions.go:123] node cpu capacity is 8
	I1115 10:36:41.475284  387591 node_conditions.go:105] duration metric: took 2.490696ms to run NodePressure ...
	I1115 10:36:41.475297  387591 start.go:242] waiting for startup goroutines ...
	I1115 10:36:41.475306  387591 start.go:247] waiting for cluster config update ...
	I1115 10:36:41.475322  387591 start.go:256] writing updated cluster config ...
	I1115 10:36:41.475622  387591 ssh_runner.go:195] Run: rm -f paused
	I1115 10:36:41.529383  387591 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:36:41.531753  387591 out.go:179] * Done! kubectl is now configured to use "newest-cni-086099" cluster and "default" namespace by default
	I1115 10:36:42.149798  388420 node_ready.go:49] node "default-k8s-diff-port-026691" is "Ready"
	I1115 10:36:42.149832  388420 node_ready.go:38] duration metric: took 2.87526393s for node "default-k8s-diff-port-026691" to be "Ready" ...
	I1115 10:36:42.149851  388420 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:36:42.149915  388420 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:36:43.654191  388420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.38943226s)
	I1115 10:36:43.654229  388420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.29621492s)
	I1115 10:36:43.654402  388420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.971169317s)
	I1115 10:36:43.654437  388420 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.50449925s)
	I1115 10:36:43.654474  388420 api_server.go:72] duration metric: took 4.78163246s to wait for apiserver process to appear ...
	I1115 10:36:43.654482  388420 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:36:43.654504  388420 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1115 10:36:43.655988  388420 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-026691 addons enable metrics-server
	
	I1115 10:36:43.659469  388420 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:36:43.659501  388420 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:36:43.660788  388420 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	
	
	==> CRI-O <==
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.424659063Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.427045481Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-6jpzt/POD" id=fbfe2f86-7411-4eb6-9014-45430d2a5cfe name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.42716381Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.430097698Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=6b7e8788-d22d-46c1-b18d-777d7b0bc391 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.430973215Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=fbfe2f86-7411-4eb6-9014-45430d2a5cfe name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.436012519Z" level=info msg="Ran pod sandbox 346ea9d5abba6e426c31383eb4ebefb7d0433b38b3adc9de1cf72f2f503f06be with infra container: kube-system/kindnet-2h7mm/POD" id=6b7e8788-d22d-46c1-b18d-777d7b0bc391 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.437463486Z" level=info msg="Ran pod sandbox 6b9a0f486e1136cc2b37740a110565e14407f000c0964255ea2a54da50666733 with infra container: kube-system/kube-proxy-6jpzt/POD" id=fbfe2f86-7411-4eb6-9014-45430d2a5cfe name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.438299823Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=2e888f2d-9940-4131-84c1-bf859332fb59 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.439858297Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=0c4d715c-09aa-4622-8cc2-cf519bebe33b name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.440524624Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=efa0ae6d-549f-49eb-ac2e-b3150c521f08 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.441684348Z" level=info msg="Creating container: kube-system/kindnet-2h7mm/kindnet-cni" id=6ac10152-47aa-40a4-83f5-c4e28de6a00f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.441774145Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.442101092Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=7dd17626-7116-4a93-947c-ed9982c33444 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.444740961Z" level=info msg="Creating container: kube-system/kube-proxy-6jpzt/kube-proxy" id=79d268c9-7d62-444c-994c-bc47147748b2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.445067198Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.446931153Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.447531314Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.452528673Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.514464246Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.534443583Z" level=info msg="Created container b75f03f3e085747d0020a39d05a5df311846f6ea3bfb6996c31eb63e75196968: kube-system/kindnet-2h7mm/kindnet-cni" id=6ac10152-47aa-40a4-83f5-c4e28de6a00f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.535143824Z" level=info msg="Starting container: b75f03f3e085747d0020a39d05a5df311846f6ea3bfb6996c31eb63e75196968" id=c4bfc0c4-0e5b-4087-a616-20fd4972f91d name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.537603641Z" level=info msg="Started container" PID=1159 containerID=b75f03f3e085747d0020a39d05a5df311846f6ea3bfb6996c31eb63e75196968 description=kube-system/kindnet-2h7mm/kindnet-cni id=c4bfc0c4-0e5b-4087-a616-20fd4972f91d name=/runtime.v1.RuntimeService/StartContainer sandboxID=346ea9d5abba6e426c31383eb4ebefb7d0433b38b3adc9de1cf72f2f503f06be
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.539739526Z" level=info msg="Created container fd0dca50b719947cba6525c14321a402534c19e3683222e6c08343aef639fcb8: kube-system/kube-proxy-6jpzt/kube-proxy" id=79d268c9-7d62-444c-994c-bc47147748b2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.540791575Z" level=info msg="Starting container: fd0dca50b719947cba6525c14321a402534c19e3683222e6c08343aef639fcb8" id=9bf6725a-14b6-48db-aad3-635499eac5c2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:36:40 newest-cni-086099 crio[642]: time="2025-11-15T10:36:40.544419368Z" level=info msg="Started container" PID=1160 containerID=fd0dca50b719947cba6525c14321a402534c19e3683222e6c08343aef639fcb8 description=kube-system/kube-proxy-6jpzt/kube-proxy id=9bf6725a-14b6-48db-aad3-635499eac5c2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6b9a0f486e1136cc2b37740a110565e14407f000c0964255ea2a54da50666733
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	fd0dca50b7199       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   6 seconds ago       Running             kube-proxy                1                   6b9a0f486e113       kube-proxy-6jpzt                            kube-system
	b75f03f3e0857       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   346ea9d5abba6       kindnet-2h7mm                               kube-system
	38ec6363bcab1       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   11 seconds ago      Running             kube-scheduler            1                   1dc8f60d5b838       kube-scheduler-newest-cni-086099            kube-system
	dcddb7cd9963b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   11 seconds ago      Running             kube-apiserver            1                   e1948784d69e4       kube-apiserver-newest-cni-086099            kube-system
	938d8a7a407d1       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   11 seconds ago      Running             kube-controller-manager   1                   b3233534b4f24       kube-controller-manager-newest-cni-086099   kube-system
	6799daac297c1       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   11 seconds ago      Running             etcd                      1                   4d385f8815122       etcd-newest-cni-086099                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-086099
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-086099
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=newest-cni-086099
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_36_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:36:15 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-086099
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:36:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:36:39 +0000   Sat, 15 Nov 2025 10:36:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:36:39 +0000   Sat, 15 Nov 2025 10:36:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:36:39 +0000   Sat, 15 Nov 2025 10:36:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 15 Nov 2025 10:36:39 +0000   Sat, 15 Nov 2025 10:36:12 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-086099
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                43538429-02c4-40c8-b533-c24bc0895325
	  Boot ID:                    6a33f504-3a8c-4d49-8fc5-301cf5275efc
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-086099                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         28s
	  kube-system                 kindnet-2h7mm                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-newest-cni-086099             250m (3%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-newest-cni-086099    200m (2%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-6jpzt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-newest-cni-086099             100m (1%)     0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 21s                kube-proxy       
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  35s (x8 over 35s)  kubelet          Node newest-cni-086099 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    35s (x8 over 35s)  kubelet          Node newest-cni-086099 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     35s (x8 over 35s)  kubelet          Node newest-cni-086099 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    28s                kubelet          Node newest-cni-086099 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 28s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  28s                kubelet          Node newest-cni-086099 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     28s                kubelet          Node newest-cni-086099 status is now: NodeHasSufficientPID
	  Normal   Starting                 28s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           24s                node-controller  Node newest-cni-086099 event: Registered Node newest-cni-086099 in Controller
	  Normal   Starting                 11s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11s (x8 over 11s)  kubelet          Node newest-cni-086099 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11s (x8 over 11s)  kubelet          Node newest-cni-086099 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11s (x8 over 11s)  kubelet          Node newest-cni-086099 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-086099 event: Registered Node newest-cni-086099 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 7f 53 f9 15 93 08 06
	[  +0.017821] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c6 66 25 e9 45 28 08 06
	[ +19.178294] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 a3 65 b9 28 15 08 06
	[  +0.347929] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 6d df 33 a5 ad 08 06
	[  +0.000319] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 7f 53 f9 15 93 08 06
	[Nov15 10:33] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 96 ed 10 e8 9a 80 08 06
	[  +0.000481] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 a3 65 b9 28 15 08 06
	[ +35.214217] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 c2 9a 56 52 0c 08 06
	[  +9.104720] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 c2 c4 d1 7c dd 08 06
	[Nov15 10:34] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 68 1e 80 53 05 08 06
	[  +0.000444] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 c2 c4 d1 7c dd 08 06
	[ +18.836046] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 98 db 78 4f 0b 08 06
	[  +0.000708] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 c2 9a 56 52 0c 08 06
	
	
	==> etcd [6799daac297c17fbe94a59a4b23a00cc23bdfe7671f3f31f803b074be4ef25a5] <==
	{"level":"warn","ts":"2025-11-15T10:36:38.490076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.495798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.514542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.522242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.529188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.535379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.542198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.553901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.568186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.575504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.584554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.598098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.604221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.610290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.622707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.629143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.635449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.642140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.660843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.669440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.677770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.694462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.701292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.708209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:38.788502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37052","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:36:46 up  2:19,  0 user,  load average: 3.16, 4.13, 2.77
	Linux newest-cni-086099 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b75f03f3e085747d0020a39d05a5df311846f6ea3bfb6996c31eb63e75196968] <==
	I1115 10:36:40.816911       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:36:40.817211       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1115 10:36:40.817433       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:36:40.817492       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:36:40.817547       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:36:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:36:40.969746       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:36:41.057277       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:36:41.057326       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:36:41.058121       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [dcddb7cd9963badc91f7602efc0c7dd3a6aa66928df5288774cb992a0f211b2c] <==
	I1115 10:36:39.520614       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 10:36:39.521165       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1115 10:36:39.521277       1 aggregator.go:171] initial CRD sync complete...
	I1115 10:36:39.521330       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 10:36:39.521355       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 10:36:39.521378       1 cache.go:39] Caches are synced for autoregister controller
	I1115 10:36:39.524855       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1115 10:36:39.525244       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 10:36:39.532306       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1115 10:36:39.532342       1 policy_source.go:240] refreshing policies
	I1115 10:36:39.534468       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1115 10:36:39.536735       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1115 10:36:39.536809       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 10:36:39.616742       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:36:40.255985       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:36:40.419571       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:36:40.425466       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 10:36:40.529845       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:36:40.627415       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:36:40.636605       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:36:40.751232       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.120.70"}
	I1115 10:36:40.818272       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.36.81"}
	I1115 10:36:43.136817       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:36:43.285881       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:36:43.336346       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [938d8a7a407d113893b3543524a1f018292ab3c06ad38c37347f5c09b4f19aed] <==
	I1115 10:36:42.933064       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 10:36:42.933090       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 10:36:42.933230       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 10:36:42.934314       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 10:36:42.934334       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1115 10:36:42.934360       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 10:36:42.934453       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 10:36:42.935591       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 10:36:42.935687       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1115 10:36:42.936858       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 10:36:42.937997       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 10:36:42.938073       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 10:36:42.938131       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:36:42.939145       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:36:42.940719       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 10:36:42.946916       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 10:36:42.948585       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-086099"
	I1115 10:36:42.948666       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1115 10:36:42.948826       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 10:36:42.952363       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 10:36:42.955682       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:36:42.965887       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:36:42.984036       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:36:42.984057       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:36:42.984065       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [fd0dca50b719947cba6525c14321a402534c19e3683222e6c08343aef639fcb8] <==
	I1115 10:36:40.719802       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:36:40.848650       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:36:40.949095       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:36:40.949144       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1115 10:36:40.949277       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:36:41.018176       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:36:41.018231       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:36:41.024472       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:36:41.024887       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:36:41.024910       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:36:41.026438       1 config.go:200] "Starting service config controller"
	I1115 10:36:41.026463       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:36:41.026536       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:36:41.026557       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:36:41.026578       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:36:41.026593       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:36:41.026613       1 config.go:309] "Starting node config controller"
	I1115 10:36:41.026625       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:36:41.127470       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:36:41.127488       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 10:36:41.127517       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:36:41.127528       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [38ec6363bcab12e98490c10ae199bee3e96e76c3b252eb09eaacbceb79f4fee3] <==
	I1115 10:36:37.058787       1 serving.go:386] Generated self-signed cert in-memory
	I1115 10:36:39.548992       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 10:36:39.549025       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:36:39.620478       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:36:39.620490       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1115 10:36:39.620530       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:36:39.620537       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1115 10:36:39.620590       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:36:39.620602       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:36:39.620867       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 10:36:39.620978       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:36:39.721065       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:36:39.721080       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:36:39.721114       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Nov 15 10:36:37 newest-cni-086099 kubelet[789]: E1115 10:36:37.153946     789 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-086099\" not found" node="newest-cni-086099"
	Nov 15 10:36:38 newest-cni-086099 kubelet[789]: E1115 10:36:38.155869     789 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-086099\" not found" node="newest-cni-086099"
	Nov 15 10:36:38 newest-cni-086099 kubelet[789]: E1115 10:36:38.155941     789 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-086099\" not found" node="newest-cni-086099"
	Nov 15 10:36:39 newest-cni-086099 kubelet[789]: I1115 10:36:39.514532     789 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-086099"
	Nov 15 10:36:39 newest-cni-086099 kubelet[789]: I1115 10:36:39.615147     789 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-086099"
	Nov 15 10:36:39 newest-cni-086099 kubelet[789]: I1115 10:36:39.615366     789 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-086099"
	Nov 15 10:36:39 newest-cni-086099 kubelet[789]: I1115 10:36:39.615447     789 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 15 10:36:39 newest-cni-086099 kubelet[789]: I1115 10:36:39.617054     789 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 15 10:36:39 newest-cni-086099 kubelet[789]: E1115 10:36:39.634831     789 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-086099\" already exists" pod="kube-system/kube-controller-manager-newest-cni-086099"
	Nov 15 10:36:39 newest-cni-086099 kubelet[789]: I1115 10:36:39.634880     789 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-086099"
	Nov 15 10:36:39 newest-cni-086099 kubelet[789]: E1115 10:36:39.715821     789 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-086099\" already exists" pod="kube-system/kube-scheduler-newest-cni-086099"
	Nov 15 10:36:39 newest-cni-086099 kubelet[789]: I1115 10:36:39.715868     789 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-086099"
	Nov 15 10:36:39 newest-cni-086099 kubelet[789]: E1115 10:36:39.724836     789 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-086099\" already exists" pod="kube-system/etcd-newest-cni-086099"
	Nov 15 10:36:39 newest-cni-086099 kubelet[789]: I1115 10:36:39.724879     789 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-086099"
	Nov 15 10:36:39 newest-cni-086099 kubelet[789]: E1115 10:36:39.731796     789 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-086099\" already exists" pod="kube-system/kube-apiserver-newest-cni-086099"
	Nov 15 10:36:40 newest-cni-086099 kubelet[789]: I1115 10:36:40.116668     789 apiserver.go:52] "Watching apiserver"
	Nov 15 10:36:40 newest-cni-086099 kubelet[789]: I1115 10:36:40.170342     789 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 15 10:36:40 newest-cni-086099 kubelet[789]: I1115 10:36:40.243497     789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7409c19f-472b-4074-81d0-8e43ac2bc9d4-xtables-lock\") pod \"kube-proxy-6jpzt\" (UID: \"7409c19f-472b-4074-81d0-8e43ac2bc9d4\") " pod="kube-system/kube-proxy-6jpzt"
	Nov 15 10:36:40 newest-cni-086099 kubelet[789]: I1115 10:36:40.243553     789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1b25f4e6-5f26-42ce-8ceb-56003682c785-cni-cfg\") pod \"kindnet-2h7mm\" (UID: \"1b25f4e6-5f26-42ce-8ceb-56003682c785\") " pod="kube-system/kindnet-2h7mm"
	Nov 15 10:36:40 newest-cni-086099 kubelet[789]: I1115 10:36:40.243582     789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b25f4e6-5f26-42ce-8ceb-56003682c785-lib-modules\") pod \"kindnet-2h7mm\" (UID: \"1b25f4e6-5f26-42ce-8ceb-56003682c785\") " pod="kube-system/kindnet-2h7mm"
	Nov 15 10:36:40 newest-cni-086099 kubelet[789]: I1115 10:36:40.243631     789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b25f4e6-5f26-42ce-8ceb-56003682c785-xtables-lock\") pod \"kindnet-2h7mm\" (UID: \"1b25f4e6-5f26-42ce-8ceb-56003682c785\") " pod="kube-system/kindnet-2h7mm"
	Nov 15 10:36:40 newest-cni-086099 kubelet[789]: I1115 10:36:40.243657     789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7409c19f-472b-4074-81d0-8e43ac2bc9d4-lib-modules\") pod \"kube-proxy-6jpzt\" (UID: \"7409c19f-472b-4074-81d0-8e43ac2bc9d4\") " pod="kube-system/kube-proxy-6jpzt"
	Nov 15 10:36:42 newest-cni-086099 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:36:42 newest-cni-086099 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:36:42 newest-cni-086099 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-086099 -n newest-cni-086099
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-086099 -n newest-cni-086099: exit status 2 (355.429161ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-086099 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-rblh2 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-lwxxq kubernetes-dashboard-855c9754f9-r2tgx
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-086099 describe pod coredns-66bc5c9577-rblh2 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-lwxxq kubernetes-dashboard-855c9754f9-r2tgx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-086099 describe pod coredns-66bc5c9577-rblh2 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-lwxxq kubernetes-dashboard-855c9754f9-r2tgx: exit status 1 (73.735065ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-rblh2" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-lwxxq" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-r2tgx" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-086099 describe pod coredns-66bc5c9577-rblh2 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-lwxxq kubernetes-dashboard-855c9754f9-r2tgx: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.37s)
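The post-mortem above follows a fixed pattern: check the API server status, list pods whose status.phase is not Running via a field selector, then kubectl describe each of them for the report. Below is a minimal Go sketch of that same pattern; it is not the actual helpers_test.go implementation, and the context name "newest-cni-086099" is reused from the log purely for illustration. The NotFound errors above simply mean the listed pods had already been deleted by the time describe ran.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// nonRunningPods lists pod names (all namespaces) whose phase is not Running,
// mirroring the field-selector query used in the post-mortem above.
func nonRunningPods(kubeContext string) ([]string, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ctx := "newest-cni-086099" // context name taken from the log above
	pods, err := nonRunningPods(ctx)
	if err != nil {
		fmt.Println("listing non-running pods failed:", err)
		return
	}
	fmt.Println("non-running pods:", pods)
	if len(pods) == 0 {
		return
	}
	// Describing pods that were deleted in the meantime yields
	// "Error from server (NotFound)" and a non-zero exit, as seen above.
	args := append([]string{"--context", ctx, "describe", "pod"}, pods...)
	out, _ := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Print(string(out))
}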

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (6.61s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-719574 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-719574 --alsologtostderr -v=1: exit status 80 (2.521188668s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-719574 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:36:50.547452  394325 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:36:50.547697  394325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:36:50.547705  394325 out.go:374] Setting ErrFile to fd 2...
	I1115 10:36:50.547709  394325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:36:50.547917  394325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 10:36:50.548165  394325 out.go:368] Setting JSON to false
	I1115 10:36:50.548198  394325 mustload.go:66] Loading cluster: embed-certs-719574
	I1115 10:36:50.549716  394325 config.go:182] Loaded profile config "embed-certs-719574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:50.550178  394325 cli_runner.go:164] Run: docker container inspect embed-certs-719574 --format={{.State.Status}}
	I1115 10:36:50.568738  394325 host.go:66] Checking if "embed-certs-719574" exists ...
	I1115 10:36:50.569029  394325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:36:50.647232  394325 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:66 SystemTime:2025-11-15 10:36:50.636999223 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:36:50.647979  394325 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-719574 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1115 10:36:50.650239  394325 out.go:179] * Pausing node embed-certs-719574 ... 
	I1115 10:36:50.651320  394325 host.go:66] Checking if "embed-certs-719574" exists ...
	I1115 10:36:50.651597  394325 ssh_runner.go:195] Run: systemctl --version
	I1115 10:36:50.651644  394325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-719574
	I1115 10:36:50.669811  394325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/embed-certs-719574/id_rsa Username:docker}
	I1115 10:36:50.762782  394325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:36:50.793702  394325 pause.go:52] kubelet running: true
	I1115 10:36:50.793771  394325 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:36:50.957426  394325 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:36:50.957510  394325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:36:51.024208  394325 cri.go:89] found id: "fb08cb8a4d59b7ee225bd83ec883701f5430ec14c7bf4ecd1bbfd4dc422ad397"
	I1115 10:36:51.024230  394325 cri.go:89] found id: "2b7fc8178ede99bd2bac3d421353e5930c25042c3ada59734b1a0b0847235087"
	I1115 10:36:51.024234  394325 cri.go:89] found id: "10b6f8a418fda8c0a43012d2666abd1f951f96d5c6a2315faf72d61ccd752d2f"
	I1115 10:36:51.024239  394325 cri.go:89] found id: "f676bcf138c32c1e2f79a1401bcec6579bb0e86468d1bbfa5fa8782637358ec9"
	I1115 10:36:51.024242  394325 cri.go:89] found id: "7fc2fdf9c30a8cc273f623029958f218943ba5c78f1f3342ad2488e439a98294"
	I1115 10:36:51.024252  394325 cri.go:89] found id: "34a183c86eaa1af613ae4708887513840319366cbb4179e7f1678698297eade2"
	I1115 10:36:51.024263  394325 cri.go:89] found id: "a04037d02e2f108f2bc0bc86b223e37c201372e3009a0b55923609e9c3f5b7b5"
	I1115 10:36:51.024265  394325 cri.go:89] found id: "d2523d5b7384ac5c1a3b025b86aa00148daa68b776dfada0c3e9b0dd751d444c"
	I1115 10:36:51.024268  394325 cri.go:89] found id: "56627175c47b813bf4460ca13a901ec3152cbb4d22f0362e40133db8b19b3e87"
	I1115 10:36:51.024281  394325 cri.go:89] found id: "3ead158324196d73b353abf0dbc9c77d451a7d90a3b992875227cdf4ff71cadf"
	I1115 10:36:51.024284  394325 cri.go:89] found id: "47d1f78e1495806bd1e6bc6db543ba9f35ac22c92ea194e3742c7b61fc3a2da3"
	I1115 10:36:51.024286  394325 cri.go:89] found id: "6587299f75a35df796ecc6b64e4a4ce75d90dc27bf9c4ef271dc17d17c347b48"
	I1115 10:36:51.024289  394325 cri.go:89] found id: ""
	I1115 10:36:51.024326  394325 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:36:51.035864  394325 retry.go:31] will retry after 225.501324ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:51Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:36:51.262321  394325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:36:51.275547  394325 pause.go:52] kubelet running: false
	I1115 10:36:51.275630  394325 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:36:51.402290  394325 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:36:51.402363  394325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:36:51.475871  394325 cri.go:89] found id: "fb08cb8a4d59b7ee225bd83ec883701f5430ec14c7bf4ecd1bbfd4dc422ad397"
	I1115 10:36:51.475899  394325 cri.go:89] found id: "2b7fc8178ede99bd2bac3d421353e5930c25042c3ada59734b1a0b0847235087"
	I1115 10:36:51.475905  394325 cri.go:89] found id: "10b6f8a418fda8c0a43012d2666abd1f951f96d5c6a2315faf72d61ccd752d2f"
	I1115 10:36:51.475910  394325 cri.go:89] found id: "f676bcf138c32c1e2f79a1401bcec6579bb0e86468d1bbfa5fa8782637358ec9"
	I1115 10:36:51.475914  394325 cri.go:89] found id: "7fc2fdf9c30a8cc273f623029958f218943ba5c78f1f3342ad2488e439a98294"
	I1115 10:36:51.475918  394325 cri.go:89] found id: "34a183c86eaa1af613ae4708887513840319366cbb4179e7f1678698297eade2"
	I1115 10:36:51.475922  394325 cri.go:89] found id: "a04037d02e2f108f2bc0bc86b223e37c201372e3009a0b55923609e9c3f5b7b5"
	I1115 10:36:51.475926  394325 cri.go:89] found id: "d2523d5b7384ac5c1a3b025b86aa00148daa68b776dfada0c3e9b0dd751d444c"
	I1115 10:36:51.475930  394325 cri.go:89] found id: "56627175c47b813bf4460ca13a901ec3152cbb4d22f0362e40133db8b19b3e87"
	I1115 10:36:51.475939  394325 cri.go:89] found id: "3ead158324196d73b353abf0dbc9c77d451a7d90a3b992875227cdf4ff71cadf"
	I1115 10:36:51.475943  394325 cri.go:89] found id: "47d1f78e1495806bd1e6bc6db543ba9f35ac22c92ea194e3742c7b61fc3a2da3"
	I1115 10:36:51.475947  394325 cri.go:89] found id: "6587299f75a35df796ecc6b64e4a4ce75d90dc27bf9c4ef271dc17d17c347b48"
	I1115 10:36:51.475967  394325 cri.go:89] found id: ""
	I1115 10:36:51.476013  394325 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:36:51.491217  394325 retry.go:31] will retry after 536.317514ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:51Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:36:52.027982  394325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:36:52.041202  394325 pause.go:52] kubelet running: false
	I1115 10:36:52.041268  394325 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:36:52.198857  394325 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:36:52.199102  394325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:36:52.275140  394325 cri.go:89] found id: "fb08cb8a4d59b7ee225bd83ec883701f5430ec14c7bf4ecd1bbfd4dc422ad397"
	I1115 10:36:52.275168  394325 cri.go:89] found id: "2b7fc8178ede99bd2bac3d421353e5930c25042c3ada59734b1a0b0847235087"
	I1115 10:36:52.275175  394325 cri.go:89] found id: "10b6f8a418fda8c0a43012d2666abd1f951f96d5c6a2315faf72d61ccd752d2f"
	I1115 10:36:52.275180  394325 cri.go:89] found id: "f676bcf138c32c1e2f79a1401bcec6579bb0e86468d1bbfa5fa8782637358ec9"
	I1115 10:36:52.275184  394325 cri.go:89] found id: "7fc2fdf9c30a8cc273f623029958f218943ba5c78f1f3342ad2488e439a98294"
	I1115 10:36:52.275189  394325 cri.go:89] found id: "34a183c86eaa1af613ae4708887513840319366cbb4179e7f1678698297eade2"
	I1115 10:36:52.275193  394325 cri.go:89] found id: "a04037d02e2f108f2bc0bc86b223e37c201372e3009a0b55923609e9c3f5b7b5"
	I1115 10:36:52.275197  394325 cri.go:89] found id: "d2523d5b7384ac5c1a3b025b86aa00148daa68b776dfada0c3e9b0dd751d444c"
	I1115 10:36:52.275201  394325 cri.go:89] found id: "56627175c47b813bf4460ca13a901ec3152cbb4d22f0362e40133db8b19b3e87"
	I1115 10:36:52.275260  394325 cri.go:89] found id: "3ead158324196d73b353abf0dbc9c77d451a7d90a3b992875227cdf4ff71cadf"
	I1115 10:36:52.275270  394325 cri.go:89] found id: "47d1f78e1495806bd1e6bc6db543ba9f35ac22c92ea194e3742c7b61fc3a2da3"
	I1115 10:36:52.275274  394325 cri.go:89] found id: "6587299f75a35df796ecc6b64e4a4ce75d90dc27bf9c4ef271dc17d17c347b48"
	I1115 10:36:52.275279  394325 cri.go:89] found id: ""
	I1115 10:36:52.275329  394325 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:36:52.290077  394325 retry.go:31] will retry after 433.867119ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:52Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:36:52.724837  394325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:36:52.737892  394325 pause.go:52] kubelet running: false
	I1115 10:36:52.737946  394325 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:36:52.905532  394325 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:36:52.905630  394325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:36:52.981649  394325 cri.go:89] found id: "fb08cb8a4d59b7ee225bd83ec883701f5430ec14c7bf4ecd1bbfd4dc422ad397"
	I1115 10:36:52.981680  394325 cri.go:89] found id: "2b7fc8178ede99bd2bac3d421353e5930c25042c3ada59734b1a0b0847235087"
	I1115 10:36:52.981687  394325 cri.go:89] found id: "10b6f8a418fda8c0a43012d2666abd1f951f96d5c6a2315faf72d61ccd752d2f"
	I1115 10:36:52.981692  394325 cri.go:89] found id: "f676bcf138c32c1e2f79a1401bcec6579bb0e86468d1bbfa5fa8782637358ec9"
	I1115 10:36:52.981696  394325 cri.go:89] found id: "7fc2fdf9c30a8cc273f623029958f218943ba5c78f1f3342ad2488e439a98294"
	I1115 10:36:52.981701  394325 cri.go:89] found id: "34a183c86eaa1af613ae4708887513840319366cbb4179e7f1678698297eade2"
	I1115 10:36:52.981705  394325 cri.go:89] found id: "a04037d02e2f108f2bc0bc86b223e37c201372e3009a0b55923609e9c3f5b7b5"
	I1115 10:36:52.981709  394325 cri.go:89] found id: "d2523d5b7384ac5c1a3b025b86aa00148daa68b776dfada0c3e9b0dd751d444c"
	I1115 10:36:52.981713  394325 cri.go:89] found id: "56627175c47b813bf4460ca13a901ec3152cbb4d22f0362e40133db8b19b3e87"
	I1115 10:36:52.981727  394325 cri.go:89] found id: "3ead158324196d73b353abf0dbc9c77d451a7d90a3b992875227cdf4ff71cadf"
	I1115 10:36:52.981731  394325 cri.go:89] found id: "47d1f78e1495806bd1e6bc6db543ba9f35ac22c92ea194e3742c7b61fc3a2da3"
	I1115 10:36:52.981735  394325 cri.go:89] found id: "6587299f75a35df796ecc6b64e4a4ce75d90dc27bf9c4ef271dc17d17c347b48"
	I1115 10:36:52.981738  394325 cri.go:89] found id: ""
	I1115 10:36:52.981786  394325 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:36:52.998505  394325 out.go:203] 
	W1115 10:36:52.999938  394325 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:36:52.999992  394325 out.go:285] * 
	* 
	W1115 10:36:53.007089  394325 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:36:53.008437  394325 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-719574 --alsologtostderr -v=1 failed: exit status 80
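The stderr trace shows the shape of the failure: pause disables the kubelet, enumerates CRI containers with crictl, then tries to confirm the running set with `sudo runc list -f json`; on this node /run/runc does not exist, so the command keeps failing, the bounded retries are exhausted, and minikube exits with GUEST_PAUSE (exit status 80). The Go snippet below is only a simplified sketch of that retry loop, not minikube's pause implementation; the backoff durations are copied from the log for illustration.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRunningContainers mirrors the probe that fails in the log above:
// `sudo runc list -f json`. On the node in question /run/runc is missing,
// so runc exits with status 1 and "open /run/runc: no such file or directory".
func listRunningContainers() error {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		return fmt.Errorf("runc list -f json: %w\n%s", err, out)
	}
	return nil
}

func main() {
	// Backoffs copied from the retry.go lines above (225ms, 536ms, 433ms).
	backoffs := []time.Duration{225 * time.Millisecond, 536 * time.Millisecond, 433 * time.Millisecond}
	var lastErr error
	for _, d := range backoffs {
		if lastErr = listRunningContainers(); lastErr == nil {
			fmt.Println("container list succeeded; pause would proceed")
			return
		}
		fmt.Printf("will retry after %v: %v\n", d, lastErr)
		time.Sleep(d)
	}
	// Once the retries are spent, the real command surfaces the last error
	// as "Exiting due to GUEST_PAUSE" with exit status 80.
	fmt.Println("giving up:", lastErr)
}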
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-719574
helpers_test.go:243: (dbg) docker inspect embed-certs-719574:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "77b854d73395a6882ad846c6f51d5c84a63e9af66dc48bfe4bdf582432e5fa0b",
	        "Created": "2025-11-15T10:34:39.190268884Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 377946,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:35:49.857301499Z",
	            "FinishedAt": "2025-11-15T10:35:48.927246994Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/77b854d73395a6882ad846c6f51d5c84a63e9af66dc48bfe4bdf582432e5fa0b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/77b854d73395a6882ad846c6f51d5c84a63e9af66dc48bfe4bdf582432e5fa0b/hostname",
	        "HostsPath": "/var/lib/docker/containers/77b854d73395a6882ad846c6f51d5c84a63e9af66dc48bfe4bdf582432e5fa0b/hosts",
	        "LogPath": "/var/lib/docker/containers/77b854d73395a6882ad846c6f51d5c84a63e9af66dc48bfe4bdf582432e5fa0b/77b854d73395a6882ad846c6f51d5c84a63e9af66dc48bfe4bdf582432e5fa0b-json.log",
	        "Name": "/embed-certs-719574",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-719574:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-719574",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "77b854d73395a6882ad846c6f51d5c84a63e9af66dc48bfe4bdf582432e5fa0b",
	                "LowerDir": "/var/lib/docker/overlay2/d78394fc0232db571aad8803b55daf0d7b8a72cdb47dfbbeba9dc3f9e8f3bfaa-init/diff:/var/lib/docker/overlay2/507c85b6fb43d9c98216fac79d7a8d08dd20b2a63d0fbfc758336e5d38c04044/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d78394fc0232db571aad8803b55daf0d7b8a72cdb47dfbbeba9dc3f9e8f3bfaa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d78394fc0232db571aad8803b55daf0d7b8a72cdb47dfbbeba9dc3f9e8f3bfaa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d78394fc0232db571aad8803b55daf0d7b8a72cdb47dfbbeba9dc3f9e8f3bfaa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-719574",
	                "Source": "/var/lib/docker/volumes/embed-certs-719574/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-719574",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-719574",
	                "name.minikube.sigs.k8s.io": "embed-certs-719574",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "86a9edf64b01af8ecc8bab1479a6ea391424ffc25cb059062dd31baa205f6d3e",
	            "SandboxKey": "/var/run/docker/netns/86a9edf64b01",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-719574": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5402d8c1e78ae31835e502183d61451b5187ae582db12fcffbcfeece1b73ea7c",
	                    "EndpointID": "9d50f5e25f46c5221d1abd247692d7fb156d6bf660627e3691a0eedd0bab993d",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "aa:f7:cd:af:61:5f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-719574",
	                        "77b854d73395"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
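The docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" call in the stderr trace above is how minikube finds the SSH port (33119) inside the Ports map shown in this inspect output; the --format flags of docker inspect and of the `minikube status` call below are both Go text/template expressions evaluated against the underlying object. The snippet below is a small, self-contained illustration of that same template against a stand-in struct, not Docker's or minikube's actual types.

package main

import (
	"os"
	"text/template"
)

// Stand-in for the NetworkSettings.Ports shape seen in the inspect output above:
// a map from "port/proto" to a list of host bindings.
type PortBinding struct {
	HostIp   string
	HostPort string
}

type NetworkSettings struct {
	Ports map[string][]PortBinding
}

type Container struct {
	NetworkSettings NetworkSettings
}

func main() {
	c := Container{NetworkSettings: NetworkSettings{
		Ports: map[string][]PortBinding{
			// Values taken from the inspect output above.
			"22/tcp": {{HostIp: "127.0.0.1", HostPort: "33119"}},
		},
	}}

	// Same expression as the docker inspect -f call in the log:
	// index into the Ports map, take the first binding, print its HostPort.
	const expr = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}` + "\n"
	tmpl := template.Must(template.New("port").Parse(expr))
	_ = tmpl.Execute(os.Stdout, c) // prints 33119
}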
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-719574 -n embed-certs-719574
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-719574 -n embed-certs-719574: exit status 2 (369.868737ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-719574 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-719574 logs -n 25: (1.245626529s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ image   │ old-k8s-version-087235 image list --format=json                                                                                                                                                                                               │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ pause   │ -p old-k8s-version-087235 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ delete  │ -p old-k8s-version-087235                                                                                                                                                                                                                     │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-719574 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p embed-certs-719574 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p old-k8s-version-087235                                                                                                                                                                                                                     │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p newest-cni-086099 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:36 UTC │
	│ image   │ no-preload-283677 image list --format=json                                                                                                                                                                                                    │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ pause   │ -p no-preload-283677 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ delete  │ -p no-preload-283677                                                                                                                                                                                                                          │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p no-preload-283677                                                                                                                                                                                                                          │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-026691 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-026691 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p newest-cni-086099 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ stop    │ -p newest-cni-086099 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-086099 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ start   │ -p newest-cni-086099 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-026691 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ start   │ -p default-k8s-diff-port-026691 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ image   │ newest-cni-086099 image list --format=json                                                                                                                                                                                                    │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ pause   │ -p newest-cni-086099 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ delete  │ -p newest-cni-086099                                                                                                                                                                                                                          │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p newest-cni-086099                                                                                                                                                                                                                          │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ image   │ embed-certs-719574 image list --format=json                                                                                                                                                                                                   │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ pause   │ -p embed-certs-719574 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:36:31
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:36:31.193182  388420 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:36:31.193281  388420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:36:31.193289  388420 out.go:374] Setting ErrFile to fd 2...
	I1115 10:36:31.193293  388420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:36:31.193515  388420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 10:36:31.193933  388420 out.go:368] Setting JSON to false
	I1115 10:36:31.195111  388420 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8328,"bootTime":1763194663,"procs":266,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:36:31.195216  388420 start.go:143] virtualization: kvm guest
	I1115 10:36:31.196894  388420 out.go:179] * [default-k8s-diff-port-026691] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:36:31.198076  388420 notify.go:221] Checking for updates...
	I1115 10:36:31.198087  388420 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 10:36:31.199249  388420 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:36:31.200471  388420 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:36:31.201512  388420 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube
	I1115 10:36:31.202449  388420 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:36:31.203634  388420 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:36:31.205205  388420 config.go:182] Loaded profile config "default-k8s-diff-port-026691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:31.205718  388420 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:36:31.228892  388420 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:36:31.229044  388420 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:36:31.285898  388420 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:67 SystemTime:2025-11-15 10:36:31.276283811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:36:31.286032  388420 docker.go:319] overlay module found
	I1115 10:36:31.287655  388420 out.go:179] * Using the docker driver based on existing profile
	I1115 10:36:31.288859  388420 start.go:309] selected driver: docker
	I1115 10:36:31.288877  388420 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-026691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:31.288972  388420 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:36:31.289812  388420 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:36:31.352009  388420 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:66 SystemTime:2025-11-15 10:36:31.342104199 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:36:31.352371  388420 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:36:31.352408  388420 cni.go:84] Creating CNI manager for ""
	I1115 10:36:31.352457  388420 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:36:31.352498  388420 start.go:353] cluster config:
	{Name:default-k8s-diff-port-026691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:31.354418  388420 out.go:179] * Starting "default-k8s-diff-port-026691" primary control-plane node in "default-k8s-diff-port-026691" cluster
	I1115 10:36:31.355595  388420 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:36:31.356825  388420 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:36:31.357856  388420 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:36:31.357890  388420 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 10:36:31.357905  388420 cache.go:65] Caching tarball of preloaded images
	I1115 10:36:31.357944  388420 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:36:31.358020  388420 preload.go:238] Found /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 10:36:31.358036  388420 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:36:31.358136  388420 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/config.json ...
	I1115 10:36:31.378843  388420 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:36:31.378864  388420 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:36:31.378881  388420 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:36:31.378904  388420 start.go:360] acquireMachinesLock for default-k8s-diff-port-026691: {Name:mk1f3196dd9a24a043fa707553211d0b0ea8c1f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:36:31.378986  388420 start.go:364] duration metric: took 61.257µs to acquireMachinesLock for "default-k8s-diff-port-026691"
	I1115 10:36:31.379010  388420 start.go:96] Skipping create...Using existing machine configuration
	I1115 10:36:31.379018  388420 fix.go:54] fixHost starting: 
	I1115 10:36:31.379252  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:31.397025  388420 fix.go:112] recreateIfNeeded on default-k8s-diff-port-026691: state=Stopped err=<nil>
	W1115 10:36:31.397068  388420 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 10:36:29.135135  387591 out.go:252] * Restarting existing docker container for "newest-cni-086099" ...
	I1115 10:36:29.135222  387591 cli_runner.go:164] Run: docker start newest-cni-086099
	I1115 10:36:29.412428  387591 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:29.431258  387591 kic.go:430] container "newest-cni-086099" state is running.
	I1115 10:36:29.431760  387591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-086099
	I1115 10:36:29.450271  387591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/config.json ...
	I1115 10:36:29.450487  387591 machine.go:94] provisionDockerMachine start ...
	I1115 10:36:29.450542  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:29.468796  387591 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:29.469141  387591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1115 10:36:29.469158  387591 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:36:29.469768  387591 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43374->127.0.0.1:33129: read: connection reset by peer
	I1115 10:36:32.597021  387591 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-086099
	
	I1115 10:36:32.597063  387591 ubuntu.go:182] provisioning hostname "newest-cni-086099"
	I1115 10:36:32.597140  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:32.616934  387591 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:32.617209  387591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1115 10:36:32.617233  387591 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-086099 && echo "newest-cni-086099" | sudo tee /etc/hostname
	I1115 10:36:32.756237  387591 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-086099
	
	I1115 10:36:32.756329  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:32.775168  387591 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:32.775389  387591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1115 10:36:32.775405  387591 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-086099' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-086099/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-086099' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:36:32.902668  387591 main.go:143] libmachine: SSH cmd err, output: <nil>: 
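
For context on the block above: the provisioner pushes a small shell script over SSH that makes /etc/hosts resolve the new hostname, rewriting an existing 127.0.1.1 entry or appending one. A minimal Go sketch of the same idempotent logic (hypothetical helper, not minikube's actual implementation):

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    // ensureHostsEntry mirrors the script above: if /etc/hosts has no line ending
    // in the hostname, rewrite an existing 127.0.1.1 entry or append a new one.
    func ensureHostsEntry(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(string(data), "\n")
        hasHost := regexp.MustCompile(`\s` + regexp.QuoteMeta(hostname) + `$`)
        loopback := regexp.MustCompile(`^127\.0\.1\.1\s`)
        for _, l := range lines {
            if hasHost.MatchString(l) {
                return nil // hostname already mapped, nothing to do
            }
        }
        for i, l := range lines {
            if loopback.MatchString(l) {
                lines[i] = "127.0.1.1 " + hostname
                return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
            }
        }
        lines = append(lines, "127.0.1.1 "+hostname)
        return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "newest-cni-086099"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
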
	I1115 10:36:32.902701  387591 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-55448/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-55448/.minikube}
	I1115 10:36:32.902736  387591 ubuntu.go:190] setting up certificates
	I1115 10:36:32.902754  387591 provision.go:84] configureAuth start
	I1115 10:36:32.902811  387591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-086099
	I1115 10:36:32.921923  387591 provision.go:143] copyHostCerts
	I1115 10:36:32.922017  387591 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem, removing ...
	I1115 10:36:32.922035  387591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem
	I1115 10:36:32.922102  387591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem (1082 bytes)
	I1115 10:36:32.922216  387591 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem, removing ...
	I1115 10:36:32.922225  387591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem
	I1115 10:36:32.922253  387591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem (1123 bytes)
	I1115 10:36:32.922341  387591 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem, removing ...
	I1115 10:36:32.922348  387591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem
	I1115 10:36:32.922372  387591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem (1679 bytes)
	I1115 10:36:32.922421  387591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem org=jenkins.newest-cni-086099 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-086099]
	I1115 10:36:32.940854  387591 provision.go:177] copyRemoteCerts
	I1115 10:36:32.940914  387591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:36:32.940948  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:32.958931  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:33.053731  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:36:33.071243  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:36:33.088651  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 10:36:33.105219  387591 provision.go:87] duration metric: took 202.453369ms to configureAuth
	I1115 10:36:33.105244  387591 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:36:33.105414  387591 config.go:182] Loaded profile config "newest-cni-086099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:33.105509  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:33.123012  387591 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:33.123259  387591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1115 10:36:33.123277  387591 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:36:33.389799  387591 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:36:33.389822  387591 machine.go:97] duration metric: took 3.93932207s to provisionDockerMachine
	I1115 10:36:33.389835  387591 start.go:293] postStartSetup for "newest-cni-086099" (driver="docker")
	I1115 10:36:33.389844  387591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:36:33.389903  387591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:36:33.389946  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:33.409403  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:33.503330  387591 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:36:33.506790  387591 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:36:33.506815  387591 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:36:33.506825  387591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/addons for local assets ...
	I1115 10:36:33.506878  387591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/files for local assets ...
	I1115 10:36:33.506995  387591 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem -> 589622.pem in /etc/ssl/certs
	I1115 10:36:33.507126  387591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:36:33.514570  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:33.531880  387591 start.go:296] duration metric: took 142.028023ms for postStartSetup
	I1115 10:36:33.532012  387591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:36:33.532066  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:33.549908  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:33.640348  387591 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:36:33.645124  387591 fix.go:56] duration metric: took 4.529931109s for fixHost
	I1115 10:36:33.645164  387591 start.go:83] releasing machines lock for "newest-cni-086099", held for 4.529982501s
	I1115 10:36:33.645246  387591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-086099
	I1115 10:36:33.663364  387591 ssh_runner.go:195] Run: cat /version.json
	I1115 10:36:33.663400  387591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:36:33.663445  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:33.663461  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:33.682200  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:33.682521  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:33.827221  387591 ssh_runner.go:195] Run: systemctl --version
	I1115 10:36:33.834019  387591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:36:33.868151  387591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:36:33.872995  387591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:36:33.873067  387591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:36:33.881540  387591 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
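
The find/mv step above sidelines any pre-existing bridge or podman CNI configs so they cannot shadow the CNI minikube is about to install (kindnet in this run; here none were found, so nothing was disabled). Roughly equivalent Go, shown only as a sketch of the renaming logic:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNIs renames bridge/podman CNI configs to *.mk_disabled so the
    // runtime ignores them. Illustrative sketch of the find/mv step above.
    func disableBridgeCNIs(dir string) error {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return err
        }
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return err
                }
                fmt.Println("disabled", src)
            }
        }
        return nil
    }

    func main() {
        if err := disableBridgeCNIs("/etc/cni/net.d"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
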
	I1115 10:36:33.881563  387591 start.go:496] detecting cgroup driver to use...
	I1115 10:36:33.881595  387591 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:36:33.881628  387591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:36:33.895704  387591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:36:33.907633  387591 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:36:33.907681  387591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:36:33.921408  387591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	W1115 10:36:30.745845  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	W1115 10:36:32.746544  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	I1115 10:36:33.933689  387591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:36:34.015025  387591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:36:34.097166  387591 docker.go:234] disabling docker service ...
	I1115 10:36:34.097250  387591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:36:34.111501  387591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:36:34.123898  387591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:36:34.208076  387591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:36:34.289077  387591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:36:34.302010  387591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:36:34.316333  387591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:36:34.316409  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.325113  387591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:36:34.325175  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.333844  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.342343  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.350817  387591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:36:34.359269  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.368008  387591 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.376100  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.384822  387591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:36:34.392091  387591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:36:34.399149  387591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:34.478616  387591 ssh_runner.go:195] Run: sudo systemctl restart crio
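
The preceding run of sed commands edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10.1, force cgroup_manager to cgroupfs, set conmon_cgroup to pod, and open unprivileged low ports via default_sysctls, before reloading systemd and restarting crio. A stripped-down Go sketch of one of those in-place substitutions (the pause image), assuming the stock drop-in path:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
        re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        if err := os.WriteFile(path, out, 0644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
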
	I1115 10:36:34.580323  387591 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:36:34.580408  387591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:36:34.584509  387591 start.go:564] Will wait 60s for crictl version
	I1115 10:36:34.584568  387591 ssh_runner.go:195] Run: which crictl
	I1115 10:36:34.588078  387591 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:36:34.613070  387591 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
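
After restarting crio, the tool waits up to 60s for /var/run/crio/crio.sock to appear and then for crictl to report a version, as logged above. A small Go sketch of that stat-based polling pattern (hypothetical, not the tool's code):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForPath polls until the given path exists or the timeout elapses.
    func waitForPath(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
        if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("crio socket is up")
    }
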
	I1115 10:36:34.613150  387591 ssh_runner.go:195] Run: crio --version
	I1115 10:36:34.641080  387591 ssh_runner.go:195] Run: crio --version
	I1115 10:36:34.670335  387591 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:36:34.671690  387591 cli_runner.go:164] Run: docker network inspect newest-cni-086099 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:36:34.689678  387591 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1115 10:36:34.693973  387591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:34.705342  387591 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1115 10:36:31.398937  388420 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-026691" ...
	I1115 10:36:31.399016  388420 cli_runner.go:164] Run: docker start default-k8s-diff-port-026691
	I1115 10:36:31.676189  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:31.694382  388420 kic.go:430] container "default-k8s-diff-port-026691" state is running.
	I1115 10:36:31.694751  388420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-026691
	I1115 10:36:31.713425  388420 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/config.json ...
	I1115 10:36:31.713652  388420 machine.go:94] provisionDockerMachine start ...
	I1115 10:36:31.713746  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:31.732991  388420 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:31.733252  388420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1115 10:36:31.733277  388420 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:36:31.734038  388420 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45950->127.0.0.1:33134: read: connection reset by peer
	I1115 10:36:34.867843  388420 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-026691
	
	I1115 10:36:34.867883  388420 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-026691"
	I1115 10:36:34.868072  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:34.887800  388420 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:34.888079  388420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1115 10:36:34.888098  388420 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-026691 && echo "default-k8s-diff-port-026691" | sudo tee /etc/hostname
	I1115 10:36:35.027312  388420 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-026691
	
	I1115 10:36:35.027402  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:35.049307  388420 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:35.049620  388420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1115 10:36:35.049653  388420 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-026691' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-026691/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-026691' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:36:35.185792  388420 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:36:35.185824  388420 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-55448/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-55448/.minikube}
	I1115 10:36:35.185877  388420 ubuntu.go:190] setting up certificates
	I1115 10:36:35.185889  388420 provision.go:84] configureAuth start
	I1115 10:36:35.185975  388420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-026691
	I1115 10:36:35.205215  388420 provision.go:143] copyHostCerts
	I1115 10:36:35.205302  388420 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem, removing ...
	I1115 10:36:35.205325  388420 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem
	I1115 10:36:35.205419  388420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem (1082 bytes)
	I1115 10:36:35.205578  388420 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem, removing ...
	I1115 10:36:35.205600  388420 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem
	I1115 10:36:35.205648  388420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem (1123 bytes)
	I1115 10:36:35.205811  388420 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem, removing ...
	I1115 10:36:35.205831  388420 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem
	I1115 10:36:35.205877  388420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem (1679 bytes)
	I1115 10:36:35.205988  388420 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-026691 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-026691 localhost minikube]
	I1115 10:36:35.356382  388420 provision.go:177] copyRemoteCerts
	I1115 10:36:35.356441  388420 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:36:35.356476  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:35.375752  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:35.470476  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:36:35.488150  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1115 10:36:35.505264  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:36:35.522854  388420 provision.go:87] duration metric: took 336.947608ms to configureAuth
	I1115 10:36:35.522880  388420 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:36:35.523120  388420 config.go:182] Loaded profile config "default-k8s-diff-port-026691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:35.523282  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:35.543167  388420 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:35.543480  388420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1115 10:36:35.543509  388420 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:36:35.848476  388420 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:36:35.848509  388420 machine.go:97] duration metric: took 4.134839636s to provisionDockerMachine
	I1115 10:36:35.848525  388420 start.go:293] postStartSetup for "default-k8s-diff-port-026691" (driver="docker")
	I1115 10:36:35.848541  388420 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:36:35.848616  388420 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:36:35.848671  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:35.868537  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:35.963605  388420 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:36:35.967175  388420 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:36:35.967199  388420 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:36:35.967209  388420 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/addons for local assets ...
	I1115 10:36:35.967263  388420 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/files for local assets ...
	I1115 10:36:35.967339  388420 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem -> 589622.pem in /etc/ssl/certs
	I1115 10:36:35.967422  388420 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:36:35.975404  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:35.992754  388420 start.go:296] duration metric: took 144.211835ms for postStartSetup
	I1115 10:36:35.992851  388420 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:36:35.992902  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:36.010853  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:36.106652  388420 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:36:36.111301  388420 fix.go:56] duration metric: took 4.732276816s for fixHost
	I1115 10:36:36.111327  388420 start.go:83] releasing machines lock for "default-k8s-diff-port-026691", held for 4.732326241s
	I1115 10:36:36.111401  388420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-026691
	I1115 10:36:36.133087  388420 ssh_runner.go:195] Run: cat /version.json
	I1115 10:36:36.133147  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:36.133224  388420 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:36:36.133295  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:36.161597  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:36.162169  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:34.706341  387591 kubeadm.go:884] updating cluster {Name:newest-cni-086099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:36:34.706463  387591 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:36:34.706520  387591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:34.737832  387591 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:34.737871  387591 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:36:34.737929  387591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:34.765628  387591 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:34.765650  387591 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:36:34.765657  387591 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1115 10:36:34.765750  387591 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-086099 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:36:34.765813  387591 ssh_runner.go:195] Run: crio config
	I1115 10:36:34.812764  387591 cni.go:84] Creating CNI manager for ""
	I1115 10:36:34.812787  387591 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:36:34.812806  387591 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1115 10:36:34.812836  387591 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-086099 NodeName:newest-cni-086099 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:36:34.813018  387591 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-086099"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:36:34.813097  387591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:36:34.821514  387591 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:36:34.821582  387591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:36:34.829425  387591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1115 10:36:34.841803  387591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:36:34.854099  387591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1115 10:36:34.867123  387591 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:36:34.871300  387591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:34.882157  387591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:34.965624  387591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:36:34.991396  387591 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099 for IP: 192.168.103.2
	I1115 10:36:34.991421  387591 certs.go:195] generating shared ca certs ...
	I1115 10:36:34.991442  387591 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:34.991611  387591 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 10:36:34.991670  387591 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 10:36:34.991685  387591 certs.go:257] generating profile certs ...
	I1115 10:36:34.991800  387591 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/client.key
	I1115 10:36:34.991881  387591 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key.d719cdad
	I1115 10:36:34.991938  387591 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.key
	I1115 10:36:34.992114  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:36:34.992160  387591 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:36:34.992182  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:36:34.992223  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:36:34.992266  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:36:34.992298  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:36:34.992360  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:34.993060  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:36:35.012346  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:36:35.032525  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:36:35.052616  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:36:35.116969  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 10:36:35.141400  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:36:35.160318  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:36:35.178367  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:36:35.231343  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:36:35.251073  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:36:35.269574  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:36:35.287839  387591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:36:35.300609  387591 ssh_runner.go:195] Run: openssl version
	I1115 10:36:35.306757  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:36:35.315111  387591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:36:35.318673  387591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:36:35.318726  387591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:36:35.352595  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:36:35.360661  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:36:35.369044  387591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:36:35.373102  387591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:36:35.373149  387591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:36:35.407763  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:36:35.416805  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:36:35.426105  387591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:35.429879  387591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:35.429928  387591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:35.464376  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:36:35.472689  387591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:36:35.476537  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:36:35.513422  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:36:35.552107  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:36:35.627892  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:36:35.738207  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:36:35.927631  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
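
The run of openssl x509 -checkend 86400 commands above verifies that each control-plane certificate remains valid for at least the next 24 hours before the existing cluster is reused. The same check expressed in Go with crypto/x509 (illustrative sketch only):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in the PEM file expires
    // within the given window, mirroring `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM data", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        for _, p := range os.Args[1:] {
            soon, err := expiresWithin(p, 24*time.Hour)
            if err != nil {
                fmt.Fprintln(os.Stderr, err)
                continue
            }
            fmt.Printf("%s expires within 24h: %v\n", p, soon)
        }
    }
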
	I1115 10:36:36.020791  387591 kubeadm.go:401] StartCluster: {Name:newest-cni-086099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:36.020915  387591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:36:36.020993  387591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:36:36.054712  387591 cri.go:89] found id: "38ec6363bcab12e98490c10ae199bee3e96e76c3b252eb09eaacbceb79f4fee3"
	I1115 10:36:36.054741  387591 cri.go:89] found id: "dcddb7cd9963badc91f7602efc0c7dd3a6aa66928df5288774cb992a0f211b2c"
	I1115 10:36:36.054748  387591 cri.go:89] found id: "938d8a7a407d113893b3543524a1f018292ab3c06ad38c37347f5c09b4f19aed"
	I1115 10:36:36.054753  387591 cri.go:89] found id: "6799daac297c17fbe94a59a4b23a00cc23bdfe7671f3f31f803b074be4ef25a5"
	I1115 10:36:36.054758  387591 cri.go:89] found id: ""
	I1115 10:36:36.054810  387591 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:36:36.122342  387591 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:36Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:36:36.122434  387591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:36:36.132788  387591 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:36:36.132807  387591 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:36:36.132853  387591 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:36:36.144175  387591 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:36:36.145209  387591 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-086099" does not appear in /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:36:36.145870  387591 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-55448/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-086099" cluster setting kubeconfig missing "newest-cni-086099" context setting]
	I1115 10:36:36.146847  387591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
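
The kubeconfig check above finds neither a cluster nor a context entry for the profile, so the file is repaired under a write lock before anything else uses it. A hedged client-go sketch of that kind of repair; the path, profile name, and server URL below are placeholders, not values from this run:

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// repairKubeconfig adds a missing cluster and context entry for name, then
// writes the file back, mirroring the "needs updating (will repair)" step.
func repairKubeconfig(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; !ok {
		c := api.NewCluster()
		c.Server = server
		cfg.Clusters[name] = c
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := api.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name
		cfg.Contexts[name] = ctx
	}
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	// Placeholder values; the real run uses the profile name and lock shown above.
	_ = repairKubeconfig("/home/user/.kube/config", "example-profile", "https://192.0.2.10:8443")
}
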
	I1115 10:36:36.149871  387591 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:36:36.217177  387591 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1115 10:36:36.217217  387591 kubeadm.go:602] duration metric: took 84.40299ms to restartPrimaryControlPlane
	I1115 10:36:36.217231  387591 kubeadm.go:403] duration metric: took 196.454161ms to StartCluster
	I1115 10:36:36.217253  387591 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:36.217343  387591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:36:36.218632  387591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:36.218872  387591 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:36:36.218972  387591 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:36:36.219074  387591 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-086099"
	I1115 10:36:36.219094  387591 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-086099"
	W1115 10:36:36.219105  387591 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:36:36.219138  387591 host.go:66] Checking if "newest-cni-086099" exists ...
	I1115 10:36:36.219158  387591 addons.go:70] Setting dashboard=true in profile "newest-cni-086099"
	I1115 10:36:36.219163  387591 config.go:182] Loaded profile config "newest-cni-086099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:36.219193  387591 addons.go:239] Setting addon dashboard=true in "newest-cni-086099"
	W1115 10:36:36.219202  387591 addons.go:248] addon dashboard should already be in state true
	I1115 10:36:36.219217  387591 addons.go:70] Setting default-storageclass=true in profile "newest-cni-086099"
	I1115 10:36:36.219235  387591 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-086099"
	I1115 10:36:36.219248  387591 host.go:66] Checking if "newest-cni-086099" exists ...
	I1115 10:36:36.219557  387591 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:36.219712  387591 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:36.219712  387591 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:36.220680  387591 out.go:179] * Verifying Kubernetes components...
	I1115 10:36:36.221665  387591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:36.248161  387591 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:36:36.248172  387591 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 10:36:36.249608  387591 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:36:36.249628  387591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:36:36.249683  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:36.249733  387591 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 10:36:36.324481  388420 ssh_runner.go:195] Run: systemctl --version
	I1115 10:36:36.336623  388420 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:36:36.372576  388420 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:36:36.377572  388420 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:36:36.377633  388420 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:36:36.385687  388420 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
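
Any pre-existing bridge or podman CNI config would conflict with the CNI minikube manages (kindnet for this driver/runtime combination), so the find/mv above renames matching files to *.mk_disabled; here none were found. A rough Go equivalent of that rename step, for illustration only:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Sideline bridge/podman CNI configs the way the logged find/mv does.
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pattern)
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, "rename:", err)
			}
		}
	}
}
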
	I1115 10:36:36.385710  388420 start.go:496] detecting cgroup driver to use...
	I1115 10:36:36.385740  388420 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:36:36.385776  388420 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:36:36.399728  388420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:36:36.411622  388420 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:36:36.411694  388420 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:36:36.431786  388420 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:36:36.449270  388420 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:36:36.538378  388420 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:36:36.622459  388420 docker.go:234] disabling docker service ...
	I1115 10:36:36.622563  388420 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:36:36.644022  388420 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:36:36.656349  388420 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:36:36.757453  388420 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:36:36.851752  388420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:36:36.864024  388420 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:36:36.878189  388420 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:36:36.878243  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.886869  388420 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:36:36.886944  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.895649  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.904129  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.912660  388420 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:36:36.922601  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.934730  388420 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.945527  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.955227  388420 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:36:36.962702  388420 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:36:36.969927  388420 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:37.064102  388420 ssh_runner.go:195] Run: sudo systemctl restart crio
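
The sed calls above pin the pause image, switch cri-o to the cgroupfs cgroup manager, keep conmon in the "pod" cgroup, and open unprivileged low ports, then cri-o is restarted. Reconstructed from those sed expressions (not read back from the machine, and with TOML section headers omitted since the edits patch whatever sections the existing drop-in already has), the affected keys in /etc/crio/crio.conf.d/02-crio.conf end up roughly as:

pause_image = "registry.k8s.io/pause:3.10.1"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
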
	I1115 10:36:37.181392  388420 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:36:37.181469  388420 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:36:37.185705  388420 start.go:564] Will wait 60s for crictl version
	I1115 10:36:37.185759  388420 ssh_runner.go:195] Run: which crictl
	I1115 10:36:37.189374  388420 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:36:37.214797  388420 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:36:37.214872  388420 ssh_runner.go:195] Run: crio --version
	I1115 10:36:37.247024  388420 ssh_runner.go:195] Run: crio --version
	I1115 10:36:37.283127  388420 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1115 10:36:35.246243  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	I1115 10:36:37.246256  377744 pod_ready.go:94] pod "coredns-66bc5c9577-fjzk5" is "Ready"
	I1115 10:36:37.246283  377744 pod_ready.go:86] duration metric: took 33.505674032s for pod "coredns-66bc5c9577-fjzk5" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.248931  377744 pod_ready.go:83] waiting for pod "etcd-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.253449  377744 pod_ready.go:94] pod "etcd-embed-certs-719574" is "Ready"
	I1115 10:36:37.253477  377744 pod_ready.go:86] duration metric: took 4.523106ms for pod "etcd-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.258749  377744 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.262996  377744 pod_ready.go:94] pod "kube-apiserver-embed-certs-719574" is "Ready"
	I1115 10:36:37.263019  377744 pod_ready.go:86] duration metric: took 4.2473ms for pod "kube-apiserver-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.265400  377744 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.444138  377744 pod_ready.go:94] pod "kube-controller-manager-embed-certs-719574" is "Ready"
	I1115 10:36:37.444168  377744 pod_ready.go:86] duration metric: took 178.743562ms for pod "kube-controller-manager-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.644722  377744 pod_ready.go:83] waiting for pod "kube-proxy-kmc8c" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:38.044247  377744 pod_ready.go:94] pod "kube-proxy-kmc8c" is "Ready"
	I1115 10:36:38.044277  377744 pod_ready.go:86] duration metric: took 399.527336ms for pod "kube-proxy-kmc8c" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:38.245350  377744 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:38.644894  377744 pod_ready.go:94] pod "kube-scheduler-embed-certs-719574" is "Ready"
	I1115 10:36:38.645014  377744 pod_ready.go:86] duration metric: took 399.62796ms for pod "kube-scheduler-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:38.645030  377744 pod_ready.go:40] duration metric: took 34.90782271s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:36:38.702511  377744 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:36:38.706562  377744 out.go:179] * Done! kubectl is now configured to use "embed-certs-719574" cluster and "default" namespace by default
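
The pod_ready waits above poll each control-plane pod in kube-system until its Ready condition is True, or until the pod is gone. A hedged client-go sketch of that check; the kubeconfig path and pod name are placeholders and the timeouts are arbitrary:

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady blocks until the pod's Ready condition is True, the pod is
// deleted, or the timeout expires ("Ready or be gone" semantics).
func waitPodReady(cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return true, nil // pod is gone, stop waiting
			}
			if err != nil {
				return false, nil // transient API error, keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "etcd-example-node"); err != nil {
		panic(err)
	}
}
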
	I1115 10:36:37.284492  388420 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-026691 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:36:37.302095  388420 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1115 10:36:37.306321  388420 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:37.316768  388420 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-026691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:36:37.316911  388420 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:36:37.316980  388420 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:37.354039  388420 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:37.354063  388420 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:36:37.354121  388420 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:37.384223  388420 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:37.384249  388420 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:36:37.384257  388420 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1115 10:36:37.384353  388420 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-026691 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:36:37.384416  388420 ssh_runner.go:195] Run: crio config
	I1115 10:36:37.429588  388420 cni.go:84] Creating CNI manager for ""
	I1115 10:36:37.429616  388420 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:36:37.429637  388420 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:36:37.429663  388420 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-026691 NodeName:default-k8s-diff-port-026691 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:36:37.429840  388420 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-026691"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:36:37.429922  388420 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:36:37.438488  388420 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:36:37.438583  388420 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:36:37.446984  388420 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1115 10:36:37.459608  388420 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:36:37.472652  388420 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1115 10:36:37.484924  388420 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:36:37.488541  388420 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:37.498126  388420 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:37.587175  388420 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:36:37.609456  388420 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691 for IP: 192.168.85.2
	I1115 10:36:37.609480  388420 certs.go:195] generating shared ca certs ...
	I1115 10:36:37.609501  388420 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:37.609671  388420 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 10:36:37.609735  388420 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 10:36:37.609750  388420 certs.go:257] generating profile certs ...
	I1115 10:36:37.609859  388420 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/client.key
	I1115 10:36:37.609921  388420 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.key.f8824eec
	I1115 10:36:37.610007  388420 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.key
	I1115 10:36:37.610146  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:36:37.610198  388420 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:36:37.610212  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:36:37.610244  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:36:37.610278  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:36:37.610306  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:36:37.610359  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:37.611122  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:36:37.629925  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:36:37.650833  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:36:37.671862  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:36:37.696427  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1115 10:36:37.763348  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:36:37.782654  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:36:37.800720  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:36:37.817628  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:36:37.835327  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:36:37.856769  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:36:37.876039  388420 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:36:37.891255  388420 ssh_runner.go:195] Run: openssl version
	I1115 10:36:37.898994  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:36:37.907571  388420 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:36:37.912280  388420 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:36:37.912337  388420 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:36:37.950692  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:36:37.959456  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:36:37.968450  388420 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:36:37.972465  388420 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:36:37.972521  388420 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:36:38.008129  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:36:38.016745  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:36:38.027414  388420 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:38.031718  388420 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:38.031792  388420 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:38.077405  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
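
Each CA is copied into /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject hash with a .0 suffix, which is exactly what the `openssl x509 -hash -noout` plus `ln -fs` pairs above do. A small Go sketch of the same idea (shelling out to openssl for the hash; needs root, illustration only):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// trustCA computes the OpenSSL subject hash of an installed CA and links
// /etc/ssl/certs/<hash>.0 at it, mirroring the commands in the log.
func trustCA(installedPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", installedPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", installedPath, err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(installedPath, link)
}

func main() {
	if err := trustCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
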
	I1115 10:36:38.086004  388420 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:36:38.089990  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:36:38.127939  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:36:38.181791  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:36:38.256153  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:36:38.368577  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:36:38.543333  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
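
The `-checkend 86400` runs above make openssl exit non-zero if a certificate expires within the next 24 hours, which is how the existing control-plane certs are vetted before being reused. An equivalent check against crypto/x509 (a sketch, not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the Go analogue of `openssl x509 -noout -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
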
	I1115 10:36:38.645754  388420 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-026691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:38.645863  388420 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:36:38.645935  388420 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:36:38.685210  388420 cri.go:89] found id: "b04411b3a023360282fda35601339f2000829b38e465e5c4c130b7c58e111bb3"
	I1115 10:36:38.685237  388420 cri.go:89] found id: "971b8e4c2073b59c0dd66594f19cfc1d0b9f81cb1bad24b21946d5ea7b012768"
	I1115 10:36:38.685254  388420 cri.go:89] found id: "6a1db649ea51d34c5661db65312d1c8660828376983f865ce0b3d2801b219c2d"
	I1115 10:36:38.685259  388420 cri.go:89] found id: "58595dd2cf4ce1cb8f740a6594f3ee7a3c7d2587682b4e6c526266c0067303a7"
	I1115 10:36:38.685262  388420 cri.go:89] found id: ""
	I1115 10:36:38.685312  388420 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:36:38.750674  388420 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:38Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:36:38.750744  388420 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:36:38.769157  388420 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:36:38.769186  388420 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:36:38.769238  388420 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:36:38.842499  388420 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:36:38.845337  388420 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-026691" does not appear in /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:36:38.846840  388420 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-55448/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-026691" cluster setting kubeconfig missing "default-k8s-diff-port-026691" context setting]
	I1115 10:36:38.849516  388420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:38.855210  388420 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:36:38.870026  388420 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1115 10:36:38.870059  388420 kubeadm.go:602] duration metric: took 100.86647ms to restartPrimaryControlPlane
	I1115 10:36:38.870073  388420 kubeadm.go:403] duration metric: took 224.328768ms to StartCluster
	I1115 10:36:38.870094  388420 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:38.870172  388420 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:36:38.872536  388420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:38.872812  388420 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:36:38.873059  388420 config.go:182] Loaded profile config "default-k8s-diff-port-026691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:38.873024  388420 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:36:38.873181  388420 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-026691"
	I1115 10:36:38.873220  388420 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-026691"
	W1115 10:36:38.873240  388420 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:36:38.873315  388420 host.go:66] Checking if "default-k8s-diff-port-026691" exists ...
	I1115 10:36:38.873258  388420 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-026691"
	I1115 10:36:38.873640  388420 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-026691"
	W1115 10:36:38.873663  388420 addons.go:248] addon dashboard should already be in state true
	I1115 10:36:38.873444  388420 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-026691"
	I1115 10:36:38.873728  388420 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-026691"
	I1115 10:36:38.873753  388420 host.go:66] Checking if "default-k8s-diff-port-026691" exists ...
	I1115 10:36:38.874091  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:38.874589  388420 out.go:179] * Verifying Kubernetes components...
	I1115 10:36:38.874818  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:38.875168  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:38.876706  388420 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:38.907308  388420 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:36:38.907363  388420 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-026691"
	W1115 10:36:38.907464  388420 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:36:38.907503  388420 host.go:66] Checking if "default-k8s-diff-port-026691" exists ...
	I1115 10:36:38.908043  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:38.912208  388420 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:36:38.912236  388420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:36:38.912295  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:38.915346  388420 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 10:36:38.916793  388420 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 10:36:36.250323  387591 addons.go:239] Setting addon default-storageclass=true in "newest-cni-086099"
	W1115 10:36:36.250350  387591 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:36:36.250389  387591 host.go:66] Checking if "newest-cni-086099" exists ...
	I1115 10:36:36.251476  387591 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:36.255103  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 10:36:36.255128  387591 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 10:36:36.255190  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:36.278537  387591 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:36:36.278565  387591 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:36:36.278644  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:36.280814  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:36.281721  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:36.296440  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:36.630526  387591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:36:36.633566  387591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:36:36.636633  387591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:36:36.638099  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 10:36:36.638116  387591 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 10:36:36.724472  387591 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:36:36.724559  387591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:36:36.729948  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 10:36:36.730015  387591 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 10:36:36.826253  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 10:36:36.826282  387591 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 10:36:36.843537  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 10:36:36.843560  387591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 10:36:36.931895  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 10:36:36.931924  387591 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 10:36:36.945766  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 10:36:36.945791  387591 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 10:36:37.023562  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 10:36:37.023593  387591 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 10:36:37.038918  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 10:36:37.038944  387591 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 10:36:37.052909  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:36:37.052937  387591 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 10:36:37.119950  387591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:36:39.816288  387591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.182684264s)
	I1115 10:36:40.959315  387591 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.234727667s)
	I1115 10:36:40.959363  387591 api_server.go:72] duration metric: took 4.740464162s to wait for apiserver process to appear ...
	I1115 10:36:40.959371  387591 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:36:40.959395  387591 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1115 10:36:40.959325  387591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.322653976s)
	I1115 10:36:40.959440  387591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.839423734s)
	I1115 10:36:40.962518  387591 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-086099 addons enable metrics-server
	
	I1115 10:36:40.964092  387591 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1115 10:36:38.917819  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 10:36:38.917851  388420 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 10:36:38.917924  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:38.930932  388420 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:36:38.930982  388420 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:36:38.931053  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:38.933702  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:38.939670  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:38.960258  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:39.257807  388420 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:36:39.264707  388420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:36:39.270235  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 10:36:39.270261  388420 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 10:36:39.274532  388420 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-026691" to be "Ready" ...
	I1115 10:36:39.351682  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 10:36:39.351725  388420 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 10:36:39.357989  388420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:36:39.374984  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 10:36:39.375011  388420 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 10:36:39.457352  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 10:36:39.457377  388420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 10:36:39.542591  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 10:36:39.542618  388420 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 10:36:39.565925  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 10:36:39.566041  388420 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 10:36:39.580123  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 10:36:39.580242  388420 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 10:36:39.655102  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 10:36:39.655149  388420 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 10:36:39.669218  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:36:39.669246  388420 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 10:36:39.683183  388420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:36:40.965416  387591 addons.go:515] duration metric: took 4.746465999s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1115 10:36:40.965454  387591 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:36:40.965477  387591 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
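
The 500 above is expected right after the apiserver comes back: the rbac/bootstrap-roles post-start hook has not finished yet, so the wait loop simply keeps polling /healthz until it returns 200, which happens on the next attempt below. A minimal Go sketch of such a poll; TLS verification is skipped only to keep the example short, whereas the real client trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Poll the apiserver health endpoint until it answers 200 OK or we time out.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.103.2:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for /healthz")
}
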
	I1115 10:36:41.460167  387591 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1115 10:36:41.465475  387591 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1115 10:36:41.466642  387591 api_server.go:141] control plane version: v1.34.1
	I1115 10:36:41.466668  387591 api_server.go:131] duration metric: took 507.289044ms to wait for apiserver health ...
	I1115 10:36:41.466679  387591 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:36:41.470116  387591 system_pods.go:59] 8 kube-system pods found
	I1115 10:36:41.470165  387591 system_pods.go:61] "coredns-66bc5c9577-rblh2" [903029e0-3b15-43f3-836a-884de528cbc9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 10:36:41.470180  387591 system_pods.go:61] "etcd-newest-cni-086099" [6768a007-08a6-47b0-9917-cf54f577829b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:36:41.470190  387591 system_pods.go:61] "kindnet-2h7mm" [1b25f4e6-5f26-42ce-8ceb-56003682c785] Running
	I1115 10:36:41.470200  387591 system_pods.go:61] "kube-apiserver-newest-cni-086099" [3ca22829-f679-44bf-94e5-e4a368e13dcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:36:41.470210  387591 system_pods.go:61] "kube-controller-manager-newest-cni-086099" [1f45f32a-2d9e-49c0-9c69-d2aa59324564] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:36:41.470219  387591 system_pods.go:61] "kube-proxy-6jpzt" [7409c19f-472b-4074-81d0-8e43ac2bc9d4] Running
	I1115 10:36:41.470226  387591 system_pods.go:61] "kube-scheduler-newest-cni-086099" [c3510e0f-9b51-4fb5-bc6e-d0e47be8f5ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:36:41.470235  387591 system_pods.go:61] "storage-provisioner" [23166a3f-bb02-48ca-ab00-721c8c46525d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 10:36:41.470247  387591 system_pods.go:74] duration metric: took 3.560608ms to wait for pod list to return data ...
	I1115 10:36:41.470262  387591 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:36:41.472726  387591 default_sa.go:45] found service account: "default"
	I1115 10:36:41.472751  387591 default_sa.go:55] duration metric: took 2.478273ms for default service account to be created ...
	I1115 10:36:41.472765  387591 kubeadm.go:587] duration metric: took 5.253867745s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1115 10:36:41.472786  387591 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:36:41.475250  387591 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:36:41.475273  387591 node_conditions.go:123] node cpu capacity is 8
	I1115 10:36:41.475284  387591 node_conditions.go:105] duration metric: took 2.490696ms to run NodePressure ...
	I1115 10:36:41.475297  387591 start.go:242] waiting for startup goroutines ...
	I1115 10:36:41.475306  387591 start.go:247] waiting for cluster config update ...
	I1115 10:36:41.475322  387591 start.go:256] writing updated cluster config ...
	I1115 10:36:41.475622  387591 ssh_runner.go:195] Run: rm -f paused
	I1115 10:36:41.529383  387591 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:36:41.531753  387591 out.go:179] * Done! kubectl is now configured to use "newest-cni-086099" cluster and "default" namespace by default
	I1115 10:36:42.149798  388420 node_ready.go:49] node "default-k8s-diff-port-026691" is "Ready"
	I1115 10:36:42.149832  388420 node_ready.go:38] duration metric: took 2.87526393s for node "default-k8s-diff-port-026691" to be "Ready" ...
	I1115 10:36:42.149851  388420 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:36:42.149915  388420 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:36:43.654191  388420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.38943226s)
	I1115 10:36:43.654229  388420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.29621492s)
	I1115 10:36:43.654402  388420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.971169317s)
	I1115 10:36:43.654437  388420 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.50449925s)
	I1115 10:36:43.654474  388420 api_server.go:72] duration metric: took 4.78163246s to wait for apiserver process to appear ...
	I1115 10:36:43.654482  388420 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:36:43.654504  388420 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1115 10:36:43.655988  388420 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-026691 addons enable metrics-server
	
	I1115 10:36:43.659469  388420 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:36:43.659501  388420 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:36:43.660788  388420 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1115 10:36:43.661827  388420 addons.go:515] duration metric: took 4.788813528s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1115 10:36:44.155099  388420 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1115 10:36:44.160271  388420 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1115 10:36:44.161286  388420 api_server.go:141] control plane version: v1.34.1
	I1115 10:36:44.161316  388420 api_server.go:131] duration metric: took 506.825578ms to wait for apiserver health ...
	I1115 10:36:44.161327  388420 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:36:44.164559  388420 system_pods.go:59] 8 kube-system pods found
	I1115 10:36:44.164606  388420 system_pods.go:61] "coredns-66bc5c9577-5q2j4" [e6c4ca54-e0fe-45ee-88a6-33bdccbb876c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:44.164622  388420 system_pods.go:61] "etcd-default-k8s-diff-port-026691" [f8228efa-91a3-41fb-9952-3b4063ddf162] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:36:44.164631  388420 system_pods.go:61] "kindnet-hjdrk" [9e1f7579-f5a2-44cd-b77f-71219cd8827d] Running
	I1115 10:36:44.164645  388420 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-026691" [9da86b3b-bfcd-4c40-ac86-ee9d9faeb9b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:36:44.164658  388420 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-026691" [23d9be87-1de1-4c38-b65e-b084cf0fed25] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:36:44.164667  388420 system_pods.go:61] "kube-proxy-c5bw5" [ee48d34b-ae60-4a03-a7bd-df76e089eebb] Running
	I1115 10:36:44.164677  388420 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-026691" [52b3711f-015c-4af6-8672-58642cbc0c16] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:36:44.164686  388420 system_pods.go:61] "storage-provisioner" [7dedf7a9-415d-4260-b225-7ca171744768] Running
	I1115 10:36:44.164696  388420 system_pods.go:74] duration metric: took 3.356326ms to wait for pod list to return data ...
	I1115 10:36:44.164709  388420 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:36:44.166570  388420 default_sa.go:45] found service account: "default"
	I1115 10:36:44.166593  388420 default_sa.go:55] duration metric: took 1.872347ms for default service account to be created ...
	I1115 10:36:44.166603  388420 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:36:44.169425  388420 system_pods.go:86] 8 kube-system pods found
	I1115 10:36:44.169450  388420 system_pods.go:89] "coredns-66bc5c9577-5q2j4" [e6c4ca54-e0fe-45ee-88a6-33bdccbb876c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:44.169459  388420 system_pods.go:89] "etcd-default-k8s-diff-port-026691" [f8228efa-91a3-41fb-9952-3b4063ddf162] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:36:44.169467  388420 system_pods.go:89] "kindnet-hjdrk" [9e1f7579-f5a2-44cd-b77f-71219cd8827d] Running
	I1115 10:36:44.169472  388420 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-026691" [9da86b3b-bfcd-4c40-ac86-ee9d9faeb9b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:36:44.169482  388420 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-026691" [23d9be87-1de1-4c38-b65e-b084cf0fed25] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:36:44.169497  388420 system_pods.go:89] "kube-proxy-c5bw5" [ee48d34b-ae60-4a03-a7bd-df76e089eebb] Running
	I1115 10:36:44.169512  388420 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-026691" [52b3711f-015c-4af6-8672-58642cbc0c16] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:36:44.169521  388420 system_pods.go:89] "storage-provisioner" [7dedf7a9-415d-4260-b225-7ca171744768] Running
	I1115 10:36:44.169532  388420 system_pods.go:126] duration metric: took 2.922555ms to wait for k8s-apps to be running ...
	I1115 10:36:44.169541  388420 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:36:44.169593  388420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:36:44.183310  388420 system_svc.go:56] duration metric: took 13.759187ms WaitForService to wait for kubelet
	I1115 10:36:44.183342  388420 kubeadm.go:587] duration metric: took 5.310501278s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:36:44.183366  388420 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:36:44.186800  388420 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:36:44.186826  388420 node_conditions.go:123] node cpu capacity is 8
	I1115 10:36:44.186843  388420 node_conditions.go:105] duration metric: took 3.463462ms to run NodePressure ...
	I1115 10:36:44.186859  388420 start.go:242] waiting for startup goroutines ...
	I1115 10:36:44.186872  388420 start.go:247] waiting for cluster config update ...
	I1115 10:36:44.186896  388420 start.go:256] writing updated cluster config ...
	I1115 10:36:44.187247  388420 ssh_runner.go:195] Run: rm -f paused
	I1115 10:36:44.191349  388420 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:36:44.194864  388420 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5q2j4" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 10:36:46.200419  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:36:48.202278  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:36:50.700646  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 15 10:36:42 embed-certs-719574 crio[683]: time="2025-11-15T10:36:42.955556653Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:36:42 embed-certs-719574 crio[683]: time="2025-11-15T10:36:42.962918816Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:36:42 embed-certs-719574 crio[683]: time="2025-11-15T10:36:42.963019392Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:36:42 embed-certs-719574 crio[683]: time="2025-11-15T10:36:42.963052207Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:36:42 embed-certs-719574 crio[683]: time="2025-11-15T10:36:42.967912794Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:36:42 embed-certs-719574 crio[683]: time="2025-11-15T10:36:42.968355595Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:36:42 embed-certs-719574 crio[683]: time="2025-11-15T10:36:42.96848079Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:36:42 embed-certs-719574 crio[683]: time="2025-11-15T10:36:42.973199948Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:36:42 embed-certs-719574 crio[683]: time="2025-11-15T10:36:42.973226606Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:36:42 embed-certs-719574 crio[683]: time="2025-11-15T10:36:42.973250345Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:36:42 embed-certs-719574 crio[683]: time="2025-11-15T10:36:42.977375806Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:36:42 embed-certs-719574 crio[683]: time="2025-11-15T10:36:42.97781242Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:36:42 embed-certs-719574 crio[683]: time="2025-11-15T10:36:42.977851581Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:36:42 embed-certs-719574 crio[683]: time="2025-11-15T10:36:42.983376268Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:36:42 embed-certs-719574 crio[683]: time="2025-11-15T10:36:42.983404126Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:36:50 embed-certs-719574 crio[683]: time="2025-11-15T10:36:50.883021144Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6370495a-7a2c-4415-ba6c-8042137c8168 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:36:50 embed-certs-719574 crio[683]: time="2025-11-15T10:36:50.884050271Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=205796f1-51f1-424d-b738-80103f7b69e6 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:36:50 embed-certs-719574 crio[683]: time="2025-11-15T10:36:50.885163545Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vknb8/dashboard-metrics-scraper" id=e955ad95-fb48-4a24-b352-e7d7fbd8f3cb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:36:50 embed-certs-719574 crio[683]: time="2025-11-15T10:36:50.885316374Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:50 embed-certs-719574 crio[683]: time="2025-11-15T10:36:50.893680665Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:50 embed-certs-719574 crio[683]: time="2025-11-15T10:36:50.894469525Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:50 embed-certs-719574 crio[683]: time="2025-11-15T10:36:50.919592624Z" level=info msg="Created container 3ead158324196d73b353abf0dbc9c77d451a7d90a3b992875227cdf4ff71cadf: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vknb8/dashboard-metrics-scraper" id=e955ad95-fb48-4a24-b352-e7d7fbd8f3cb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:36:50 embed-certs-719574 crio[683]: time="2025-11-15T10:36:50.920230218Z" level=info msg="Starting container: 3ead158324196d73b353abf0dbc9c77d451a7d90a3b992875227cdf4ff71cadf" id=0dca07c5-631c-45fb-94f3-e1b356fdea0e name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:36:50 embed-certs-719574 crio[683]: time="2025-11-15T10:36:50.922228703Z" level=info msg="Started container" PID=1976 containerID=3ead158324196d73b353abf0dbc9c77d451a7d90a3b992875227cdf4ff71cadf description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vknb8/dashboard-metrics-scraper id=0dca07c5-631c-45fb-94f3-e1b356fdea0e name=/runtime.v1.RuntimeService/StartContainer sandboxID=ec80aacc8d40afae47d87261d086b26caf7baaa7028295fb5e42a8808fff30d5
	Nov 15 10:36:50 embed-certs-719574 conmon[1974]: conmon 3ead158324196d73b353 <ninfo>: container 1976 exited with status 1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	3ead158324196       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           3 seconds ago       Exited              dashboard-metrics-scraper   3                   ec80aacc8d40a       dashboard-metrics-scraper-6ffb444bf9-vknb8   kubernetes-dashboard
	fb08cb8a4d59b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         2                   ac40d5f35252f       storage-provisioner                          kube-system
	47d1f78e14958       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   ec80aacc8d40a       dashboard-metrics-scraper-6ffb444bf9-vknb8   kubernetes-dashboard
	6587299f75a35       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   38 seconds ago      Running             kubernetes-dashboard        0                   a5da08dc35ea4       kubernetes-dashboard-855c9754f9-tj9l5        kubernetes-dashboard
	2b7fc8178ede9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     1                   2eafe2bafc6d0       coredns-66bc5c9577-fjzk5                     kube-system
	f54cad1c6353f       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   8a7b25048e462       busybox                                      default
	10b6f8a418fda       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         1                   ac40d5f35252f       storage-provisioner                          kube-system
	f676bcf138c32       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           51 seconds ago      Running             kube-proxy                  1                   9d0e5248f3627       kube-proxy-kmc8c                             kube-system
	7fc2fdf9c30a8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 1                   ed7a712085e5d       kindnet-ql2r4                                kube-system
	34a183c86eaa1       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           55 seconds ago      Running             kube-controller-manager     1                   16d81d004d85b       kube-controller-manager-embed-certs-719574   kube-system
	a04037d02e2f1       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           55 seconds ago      Running             kube-scheduler              1                   20213b61f1710       kube-scheduler-embed-certs-719574            kube-system
	d2523d5b7384a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           55 seconds ago      Running             kube-apiserver              1                   3b4646742c423       kube-apiserver-embed-certs-719574            kube-system
	56627175c47b8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           55 seconds ago      Running             etcd                        1                   bd879d9864c53       etcd-embed-certs-719574                      kube-system
	
	
	==> coredns [2b7fc8178ede99bd2bac3d421353e5930c25042c3ada59734b1a0b0847235087] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40794 - 10169 "HINFO IN 5033773871326012940.3699568236983148320. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015344083s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-719574
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-719574
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=embed-certs-719574
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_35_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:35:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-719574
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:36:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:36:33 +0000   Sat, 15 Nov 2025 10:34:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:36:33 +0000   Sat, 15 Nov 2025 10:34:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:36:33 +0000   Sat, 15 Nov 2025 10:34:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:36:33 +0000   Sat, 15 Nov 2025 10:35:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-719574
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                4a98aacb-8676-41cf-a57c-20957fa3757b
	  Boot ID:                    6a33f504-3a8c-4d49-8fc5-301cf5275efc
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-fjzk5                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-embed-certs-719574                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-ql2r4                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-embed-certs-719574             250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-embed-certs-719574    200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-kmc8c                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-embed-certs-719574             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vknb8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-tj9l5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 104s               kube-proxy       
	  Normal   Starting                 51s                kube-proxy       
	  Warning  CgroupV1                 2m1s               kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m (x9 over 2m1s)  kubelet          Node embed-certs-719574 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m (x8 over 2m1s)  kubelet          Node embed-certs-719574 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m (x7 over 2m1s)  kubelet          Node embed-certs-719574 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  111s               kubelet          Node embed-certs-719574 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 111s               kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    111s               kubelet          Node embed-certs-719574 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     111s               kubelet          Node embed-certs-719574 status is now: NodeHasSufficientPID
	  Normal   Starting                 111s               kubelet          Starting kubelet.
	  Normal   RegisteredNode           107s               node-controller  Node embed-certs-719574 event: Registered Node embed-certs-719574 in Controller
	  Normal   NodeReady                94s                kubelet          Node embed-certs-719574 status is now: NodeReady
	  Normal   Starting                 57s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 57s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node embed-certs-719574 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node embed-certs-719574 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node embed-certs-719574 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           49s                node-controller  Node embed-certs-719574 event: Registered Node embed-certs-719574 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 7f 53 f9 15 93 08 06
	[  +0.017821] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c6 66 25 e9 45 28 08 06
	[ +19.178294] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 a3 65 b9 28 15 08 06
	[  +0.347929] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 6d df 33 a5 ad 08 06
	[  +0.000319] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 7f 53 f9 15 93 08 06
	[Nov15 10:33] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 96 ed 10 e8 9a 80 08 06
	[  +0.000481] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 a3 65 b9 28 15 08 06
	[ +35.214217] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 c2 9a 56 52 0c 08 06
	[  +9.104720] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 c2 c4 d1 7c dd 08 06
	[Nov15 10:34] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 68 1e 80 53 05 08 06
	[  +0.000444] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 c2 c4 d1 7c dd 08 06
	[ +18.836046] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 98 db 78 4f 0b 08 06
	[  +0.000708] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 c2 9a 56 52 0c 08 06
	
	
	==> etcd [56627175c47b813bf4460ca13a901ec3152cbb4d22f0362e40133db8b19b3e87] <==
	{"level":"warn","ts":"2025-11-15T10:36:00.768874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.775332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.781195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.789438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.847231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.853478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.859797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.865997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.872416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.885803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.897068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.903095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.909779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.915788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.951289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.959021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.965832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.973884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.980978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.987712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.994606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.043851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.051590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.058822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.066719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53790","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:36:54 up  2:19,  0 user,  load average: 3.15, 4.11, 2.77
	Linux embed-certs-719574 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7fc2fdf9c30a8cc273f623029958f218943ba5c78f1f3342ad2488e439a98294] <==
	I1115 10:36:02.747841       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:36:02.748130       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1115 10:36:02.748319       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:36:02.748336       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:36:02.748358       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:36:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:36:03.049045       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:36:03.049101       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:36:03.049116       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:36:03.049306       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 10:36:33.046876       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1115 10:36:33.052824       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 10:36:33.053169       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 10:36:33.146203       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1115 10:36:34.049575       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:36:34.049606       1 metrics.go:72] Registering metrics
	I1115 10:36:34.050037       1 controller.go:711] "Syncing nftables rules"
	I1115 10:36:42.955126       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1115 10:36:42.955231       1 main.go:301] handling current node
	I1115 10:36:52.961055       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1115 10:36:52.961095       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d2523d5b7384ac5c1a3b025b86aa00148daa68b776dfada0c3e9b0dd751d444c] <==
	I1115 10:36:01.845928       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 10:36:01.846469       1 aggregator.go:171] initial CRD sync complete...
	I1115 10:36:01.846490       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 10:36:01.846497       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 10:36:01.846503       1 cache.go:39] Caches are synced for autoregister controller
	I1115 10:36:01.846666       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 10:36:01.846675       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 10:36:01.846729       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 10:36:01.846796       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1115 10:36:01.848182       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:36:01.850996       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 10:36:01.851090       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1115 10:36:01.869848       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 10:36:01.971096       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:36:02.752896       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:36:03.267868       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 10:36:03.370523       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:36:03.449568       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:36:03.457880       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:36:03.563667       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.213.75"}
	I1115 10:36:03.575921       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.109.246"}
	I1115 10:36:05.329987       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:36:05.578417       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:36:05.628441       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [34a183c86eaa1af613ae4708887513840319366cbb4179e7f1678698297eade2] <==
	I1115 10:36:05.174299       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 10:36:05.174362       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 10:36:05.174508       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 10:36:05.175235       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 10:36:05.175357       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1115 10:36:05.175514       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:36:05.175585       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 10:36:05.175589       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:36:05.175706       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:36:05.175714       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 10:36:05.176303       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1115 10:36:05.178542       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 10:36:05.178688       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 10:36:05.179846       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1115 10:36:05.179940       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1115 10:36:05.180094       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1115 10:36:05.180113       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1115 10:36:05.180182       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1115 10:36:05.180351       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 10:36:05.182193       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:36:05.183398       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 10:36:05.185029       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 10:36:05.187113       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 10:36:05.194912       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 10:36:05.203199       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [f676bcf138c32c1e2f79a1401bcec6579bb0e86468d1bbfa5fa8782637358ec9] <==
	I1115 10:36:02.664777       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:36:02.846298       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:36:02.948784       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:36:02.948900       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1115 10:36:02.949155       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:36:03.052015       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:36:03.052180       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:36:03.061225       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:36:03.061793       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:36:03.061829       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:36:03.063698       1 config.go:200] "Starting service config controller"
	I1115 10:36:03.063769       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:36:03.063816       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:36:03.063823       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:36:03.063838       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:36:03.063843       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:36:03.064333       1 config.go:309] "Starting node config controller"
	I1115 10:36:03.064369       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:36:03.164560       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:36:03.164617       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:36:03.164648       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 10:36:03.164891       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [a04037d02e2f108f2bc0bc86b223e37c201372e3009a0b55923609e9c3f5b7b5] <==
	I1115 10:35:59.564973       1 serving.go:386] Generated self-signed cert in-memory
	W1115 10:36:01.752886       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1115 10:36:01.753212       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1115 10:36:01.753378       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1115 10:36:01.754418       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1115 10:36:01.865585       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 10:36:01.865769       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:36:01.877894       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:36:01.878035       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:36:01.880089       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:36:01.880216       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 10:36:01.978192       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:36:05 embed-certs-719574 kubelet[844]: I1115 10:36:05.891486     844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4033c9a9-052a-4725-a759-cefe2f0c9a8a-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-vknb8\" (UID: \"4033c9a9-052a-4725-a759-cefe2f0c9a8a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vknb8"
	Nov 15 10:36:05 embed-certs-719574 kubelet[844]: I1115 10:36:05.891502     844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pqrc\" (UniqueName: \"kubernetes.io/projected/4033c9a9-052a-4725-a759-cefe2f0c9a8a-kube-api-access-2pqrc\") pod \"dashboard-metrics-scraper-6ffb444bf9-vknb8\" (UID: \"4033c9a9-052a-4725-a759-cefe2f0c9a8a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vknb8"
	Nov 15 10:36:06 embed-certs-719574 kubelet[844]: W1115 10:36:06.125495     844 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/77b854d73395a6882ad846c6f51d5c84a63e9af66dc48bfe4bdf582432e5fa0b/crio-ec80aacc8d40afae47d87261d086b26caf7baaa7028295fb5e42a8808fff30d5 WatchSource:0}: Error finding container ec80aacc8d40afae47d87261d086b26caf7baaa7028295fb5e42a8808fff30d5: Status 404 returned error can't find the container with id ec80aacc8d40afae47d87261d086b26caf7baaa7028295fb5e42a8808fff30d5
	Nov 15 10:36:06 embed-certs-719574 kubelet[844]: W1115 10:36:06.126030     844 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/77b854d73395a6882ad846c6f51d5c84a63e9af66dc48bfe4bdf582432e5fa0b/crio-a5da08dc35ea4e5faf219762cb09085970940a36e653ddd181b6f7a2dbda2fcf WatchSource:0}: Error finding container a5da08dc35ea4e5faf219762cb09085970940a36e653ddd181b6f7a2dbda2fcf: Status 404 returned error can't find the container with id a5da08dc35ea4e5faf219762cb09085970940a36e653ddd181b6f7a2dbda2fcf
	Nov 15 10:36:06 embed-certs-719574 kubelet[844]: I1115 10:36:06.975624     844 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 15 10:36:10 embed-certs-719574 kubelet[844]: I1115 10:36:10.065579     844 scope.go:117] "RemoveContainer" containerID="9b028c55602adb5be59715c394432d21750e221d9449cfb4f669756ceda768e3"
	Nov 15 10:36:11 embed-certs-719574 kubelet[844]: I1115 10:36:11.070437     844 scope.go:117] "RemoveContainer" containerID="9b028c55602adb5be59715c394432d21750e221d9449cfb4f669756ceda768e3"
	Nov 15 10:36:11 embed-certs-719574 kubelet[844]: I1115 10:36:11.070614     844 scope.go:117] "RemoveContainer" containerID="249a33fbd4befd2570092d53aa7d4841660505a2c65352a0c18a1c74ee3be3c9"
	Nov 15 10:36:11 embed-certs-719574 kubelet[844]: E1115 10:36:11.070805     844 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vknb8_kubernetes-dashboard(4033c9a9-052a-4725-a759-cefe2f0c9a8a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vknb8" podUID="4033c9a9-052a-4725-a759-cefe2f0c9a8a"
	Nov 15 10:36:12 embed-certs-719574 kubelet[844]: I1115 10:36:12.075500     844 scope.go:117] "RemoveContainer" containerID="249a33fbd4befd2570092d53aa7d4841660505a2c65352a0c18a1c74ee3be3c9"
	Nov 15 10:36:12 embed-certs-719574 kubelet[844]: E1115 10:36:12.075709     844 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vknb8_kubernetes-dashboard(4033c9a9-052a-4725-a759-cefe2f0c9a8a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vknb8" podUID="4033c9a9-052a-4725-a759-cefe2f0c9a8a"
	Nov 15 10:36:16 embed-certs-719574 kubelet[844]: I1115 10:36:16.098503     844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tj9l5" podStartSLOduration=1.731529531 podStartE2EDuration="11.09847942s" podCreationTimestamp="2025-11-15 10:36:05 +0000 UTC" firstStartedPulling="2025-11-15 10:36:06.128641375 +0000 UTC m=+8.394806958" lastFinishedPulling="2025-11-15 10:36:15.495591281 +0000 UTC m=+17.761756847" observedRunningTime="2025-11-15 10:36:16.098202322 +0000 UTC m=+18.364367907" watchObservedRunningTime="2025-11-15 10:36:16.09847942 +0000 UTC m=+18.364645006"
	Nov 15 10:36:18 embed-certs-719574 kubelet[844]: I1115 10:36:18.290617     844 scope.go:117] "RemoveContainer" containerID="249a33fbd4befd2570092d53aa7d4841660505a2c65352a0c18a1c74ee3be3c9"
	Nov 15 10:36:18 embed-certs-719574 kubelet[844]: E1115 10:36:18.290800     844 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vknb8_kubernetes-dashboard(4033c9a9-052a-4725-a759-cefe2f0c9a8a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vknb8" podUID="4033c9a9-052a-4725-a759-cefe2f0c9a8a"
	Nov 15 10:36:29 embed-certs-719574 kubelet[844]: I1115 10:36:29.882323     844 scope.go:117] "RemoveContainer" containerID="249a33fbd4befd2570092d53aa7d4841660505a2c65352a0c18a1c74ee3be3c9"
	Nov 15 10:36:30 embed-certs-719574 kubelet[844]: I1115 10:36:30.121924     844 scope.go:117] "RemoveContainer" containerID="249a33fbd4befd2570092d53aa7d4841660505a2c65352a0c18a1c74ee3be3c9"
	Nov 15 10:36:30 embed-certs-719574 kubelet[844]: I1115 10:36:30.122156     844 scope.go:117] "RemoveContainer" containerID="47d1f78e1495806bd1e6bc6db543ba9f35ac22c92ea194e3742c7b61fc3a2da3"
	Nov 15 10:36:30 embed-certs-719574 kubelet[844]: E1115 10:36:30.122374     844 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vknb8_kubernetes-dashboard(4033c9a9-052a-4725-a759-cefe2f0c9a8a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vknb8" podUID="4033c9a9-052a-4725-a759-cefe2f0c9a8a"
	Nov 15 10:36:33 embed-certs-719574 kubelet[844]: I1115 10:36:33.132401     844 scope.go:117] "RemoveContainer" containerID="10b6f8a418fda8c0a43012d2666abd1f951f96d5c6a2315faf72d61ccd752d2f"
	Nov 15 10:36:38 embed-certs-719574 kubelet[844]: I1115 10:36:38.290508     844 scope.go:117] "RemoveContainer" containerID="47d1f78e1495806bd1e6bc6db543ba9f35ac22c92ea194e3742c7b61fc3a2da3"
	Nov 15 10:36:38 embed-certs-719574 kubelet[844]: E1115 10:36:38.290680     844 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vknb8_kubernetes-dashboard(4033c9a9-052a-4725-a759-cefe2f0c9a8a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vknb8" podUID="4033c9a9-052a-4725-a759-cefe2f0c9a8a"
	Nov 15 10:36:50 embed-certs-719574 kubelet[844]: I1115 10:36:50.882416     844 scope.go:117] "RemoveContainer" containerID="47d1f78e1495806bd1e6bc6db543ba9f35ac22c92ea194e3742c7b61fc3a2da3"
	Nov 15 10:36:50 embed-certs-719574 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:36:50 embed-certs-719574 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:36:50 embed-certs-719574 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [6587299f75a35df796ecc6b64e4a4ce75d90dc27bf9c4ef271dc17d17c347b48] <==
	2025/11/15 10:36:15 Starting overwatch
	2025/11/15 10:36:15 Using namespace: kubernetes-dashboard
	2025/11/15 10:36:15 Using in-cluster config to connect to apiserver
	2025/11/15 10:36:15 Using secret token for csrf signing
	2025/11/15 10:36:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 10:36:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 10:36:15 Successful initial request to the apiserver, version: v1.34.1
	2025/11/15 10:36:15 Generating JWE encryption key
	2025/11/15 10:36:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 10:36:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 10:36:15 Initializing JWE encryption key from synchronized object
	2025/11/15 10:36:15 Creating in-cluster Sidecar client
	2025/11/15 10:36:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:36:15 Serving insecurely on HTTP port: 9090
	2025/11/15 10:36:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [10b6f8a418fda8c0a43012d2666abd1f951f96d5c6a2315faf72d61ccd752d2f] <==
	I1115 10:36:02.575597       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 10:36:32.646645       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fb08cb8a4d59b7ee225bd83ec883701f5430ec14c7bf4ecd1bbfd4dc422ad397] <==
	I1115 10:36:33.182285       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 10:36:33.189535       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:36:33.189575       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 10:36:33.192034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:36.647348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:40.907268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:44.505976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:47.561004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:50.583280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:50.600940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:36:50.601152       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:36:50.601269       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"40f0e3ae-7c7f-492f-ba67-375413ad6bff", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-719574_6b94c6a6-9f0d-484a-88f9-f71427654633 became leader
	I1115 10:36:50.601357       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-719574_6b94c6a6-9f0d-484a-88f9-f71427654633!
	W1115 10:36:50.615294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:50.618735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:36:50.701784       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-719574_6b94c6a6-9f0d-484a-88f9-f71427654633!
	W1115 10:36:52.622063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:52.626567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:54.630286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:54.634040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-719574 -n embed-certs-719574
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-719574 -n embed-certs-719574: exit status 2 (377.166089ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-719574 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-719574
helpers_test.go:243: (dbg) docker inspect embed-certs-719574:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "77b854d73395a6882ad846c6f51d5c84a63e9af66dc48bfe4bdf582432e5fa0b",
	        "Created": "2025-11-15T10:34:39.190268884Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 377946,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:35:49.857301499Z",
	            "FinishedAt": "2025-11-15T10:35:48.927246994Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/77b854d73395a6882ad846c6f51d5c84a63e9af66dc48bfe4bdf582432e5fa0b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/77b854d73395a6882ad846c6f51d5c84a63e9af66dc48bfe4bdf582432e5fa0b/hostname",
	        "HostsPath": "/var/lib/docker/containers/77b854d73395a6882ad846c6f51d5c84a63e9af66dc48bfe4bdf582432e5fa0b/hosts",
	        "LogPath": "/var/lib/docker/containers/77b854d73395a6882ad846c6f51d5c84a63e9af66dc48bfe4bdf582432e5fa0b/77b854d73395a6882ad846c6f51d5c84a63e9af66dc48bfe4bdf582432e5fa0b-json.log",
	        "Name": "/embed-certs-719574",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-719574:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-719574",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "77b854d73395a6882ad846c6f51d5c84a63e9af66dc48bfe4bdf582432e5fa0b",
	                "LowerDir": "/var/lib/docker/overlay2/d78394fc0232db571aad8803b55daf0d7b8a72cdb47dfbbeba9dc3f9e8f3bfaa-init/diff:/var/lib/docker/overlay2/507c85b6fb43d9c98216fac79d7a8d08dd20b2a63d0fbfc758336e5d38c04044/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d78394fc0232db571aad8803b55daf0d7b8a72cdb47dfbbeba9dc3f9e8f3bfaa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d78394fc0232db571aad8803b55daf0d7b8a72cdb47dfbbeba9dc3f9e8f3bfaa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d78394fc0232db571aad8803b55daf0d7b8a72cdb47dfbbeba9dc3f9e8f3bfaa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-719574",
	                "Source": "/var/lib/docker/volumes/embed-certs-719574/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-719574",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-719574",
	                "name.minikube.sigs.k8s.io": "embed-certs-719574",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "86a9edf64b01af8ecc8bab1479a6ea391424ffc25cb059062dd31baa205f6d3e",
	            "SandboxKey": "/var/run/docker/netns/86a9edf64b01",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-719574": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5402d8c1e78ae31835e502183d61451b5187ae582db12fcffbcfeece1b73ea7c",
	                    "EndpointID": "9d50f5e25f46c5221d1abd247692d7fb156d6bf660627e3691a0eedd0bab993d",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "aa:f7:cd:af:61:5f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-719574",
	                        "77b854d73395"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-719574 -n embed-certs-719574
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-719574 -n embed-certs-719574: exit status 2 (345.169944ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-719574 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-719574 logs -n 25: (1.138286803s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ old-k8s-version-087235 image list --format=json                                                                                                                                                                                               │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ pause   │ -p old-k8s-version-087235 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ delete  │ -p old-k8s-version-087235                                                                                                                                                                                                                     │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-719574 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p embed-certs-719574 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p old-k8s-version-087235                                                                                                                                                                                                                     │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p newest-cni-086099 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:36 UTC │
	│ image   │ no-preload-283677 image list --format=json                                                                                                                                                                                                    │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ pause   │ -p no-preload-283677 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ delete  │ -p no-preload-283677                                                                                                                                                                                                                          │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p no-preload-283677                                                                                                                                                                                                                          │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-026691 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-026691 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p newest-cni-086099 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ stop    │ -p newest-cni-086099 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-086099 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ start   │ -p newest-cni-086099 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-026691 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ start   │ -p default-k8s-diff-port-026691 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ image   │ newest-cni-086099 image list --format=json                                                                                                                                                                                                    │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ pause   │ -p newest-cni-086099 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ delete  │ -p newest-cni-086099                                                                                                                                                                                                                          │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p newest-cni-086099                                                                                                                                                                                                                          │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ image   │ embed-certs-719574 image list --format=json                                                                                                                                                                                                   │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ pause   │ -p embed-certs-719574 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:36:31
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:36:31.193182  388420 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:36:31.193281  388420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:36:31.193289  388420 out.go:374] Setting ErrFile to fd 2...
	I1115 10:36:31.193293  388420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:36:31.193515  388420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 10:36:31.193933  388420 out.go:368] Setting JSON to false
	I1115 10:36:31.195111  388420 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8328,"bootTime":1763194663,"procs":266,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:36:31.195216  388420 start.go:143] virtualization: kvm guest
	I1115 10:36:31.196894  388420 out.go:179] * [default-k8s-diff-port-026691] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:36:31.198076  388420 notify.go:221] Checking for updates...
	I1115 10:36:31.198087  388420 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 10:36:31.199249  388420 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:36:31.200471  388420 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:36:31.201512  388420 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube
	I1115 10:36:31.202449  388420 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:36:31.203634  388420 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:36:31.205205  388420 config.go:182] Loaded profile config "default-k8s-diff-port-026691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:31.205718  388420 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:36:31.228892  388420 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:36:31.229044  388420 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:36:31.285898  388420 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:67 SystemTime:2025-11-15 10:36:31.276283811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:36:31.286032  388420 docker.go:319] overlay module found
	I1115 10:36:31.287655  388420 out.go:179] * Using the docker driver based on existing profile
	I1115 10:36:31.288859  388420 start.go:309] selected driver: docker
	I1115 10:36:31.288877  388420 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-026691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:31.288972  388420 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:36:31.289812  388420 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:36:31.352009  388420 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:66 SystemTime:2025-11-15 10:36:31.342104199 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:36:31.352371  388420 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:36:31.352408  388420 cni.go:84] Creating CNI manager for ""
	I1115 10:36:31.352457  388420 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:36:31.352498  388420 start.go:353] cluster config:
	{Name:default-k8s-diff-port-026691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:31.354418  388420 out.go:179] * Starting "default-k8s-diff-port-026691" primary control-plane node in "default-k8s-diff-port-026691" cluster
	I1115 10:36:31.355595  388420 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:36:31.356825  388420 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:36:31.357856  388420 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:36:31.357890  388420 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 10:36:31.357905  388420 cache.go:65] Caching tarball of preloaded images
	I1115 10:36:31.357944  388420 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:36:31.358020  388420 preload.go:238] Found /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 10:36:31.358036  388420 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:36:31.358136  388420 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/config.json ...
	I1115 10:36:31.378843  388420 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:36:31.378864  388420 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:36:31.378881  388420 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:36:31.378904  388420 start.go:360] acquireMachinesLock for default-k8s-diff-port-026691: {Name:mk1f3196dd9a24a043fa707553211d0b0ea8c1f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:36:31.378986  388420 start.go:364] duration metric: took 61.257µs to acquireMachinesLock for "default-k8s-diff-port-026691"
	I1115 10:36:31.379010  388420 start.go:96] Skipping create...Using existing machine configuration
	I1115 10:36:31.379018  388420 fix.go:54] fixHost starting: 
	I1115 10:36:31.379252  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:31.397025  388420 fix.go:112] recreateIfNeeded on default-k8s-diff-port-026691: state=Stopped err=<nil>
	W1115 10:36:31.397068  388420 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 10:36:29.135135  387591 out.go:252] * Restarting existing docker container for "newest-cni-086099" ...
	I1115 10:36:29.135222  387591 cli_runner.go:164] Run: docker start newest-cni-086099
	I1115 10:36:29.412428  387591 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:29.431258  387591 kic.go:430] container "newest-cni-086099" state is running.
	I1115 10:36:29.431760  387591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-086099
	I1115 10:36:29.450271  387591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/config.json ...
	I1115 10:36:29.450487  387591 machine.go:94] provisionDockerMachine start ...
	I1115 10:36:29.450542  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:29.468796  387591 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:29.469141  387591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1115 10:36:29.469158  387591 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:36:29.469768  387591 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43374->127.0.0.1:33129: read: connection reset by peer
	I1115 10:36:32.597021  387591 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-086099
	
	I1115 10:36:32.597063  387591 ubuntu.go:182] provisioning hostname "newest-cni-086099"
	I1115 10:36:32.597140  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:32.616934  387591 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:32.617209  387591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1115 10:36:32.617233  387591 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-086099 && echo "newest-cni-086099" | sudo tee /etc/hostname
	I1115 10:36:32.756237  387591 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-086099
	
	I1115 10:36:32.756329  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:32.775168  387591 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:32.775389  387591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1115 10:36:32.775405  387591 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-086099' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-086099/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-086099' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:36:32.902668  387591 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:36:32.902701  387591 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-55448/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-55448/.minikube}
	I1115 10:36:32.902736  387591 ubuntu.go:190] setting up certificates
	I1115 10:36:32.902754  387591 provision.go:84] configureAuth start
	I1115 10:36:32.902811  387591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-086099
	I1115 10:36:32.921923  387591 provision.go:143] copyHostCerts
	I1115 10:36:32.922017  387591 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem, removing ...
	I1115 10:36:32.922035  387591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem
	I1115 10:36:32.922102  387591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem (1082 bytes)
	I1115 10:36:32.922216  387591 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem, removing ...
	I1115 10:36:32.922225  387591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem
	I1115 10:36:32.922253  387591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem (1123 bytes)
	I1115 10:36:32.922341  387591 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem, removing ...
	I1115 10:36:32.922348  387591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem
	I1115 10:36:32.922372  387591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem (1679 bytes)
	I1115 10:36:32.922421  387591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem org=jenkins.newest-cni-086099 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-086099]
	I1115 10:36:32.940854  387591 provision.go:177] copyRemoteCerts
	I1115 10:36:32.940914  387591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:36:32.940948  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:32.958931  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:33.053731  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:36:33.071243  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:36:33.088651  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 10:36:33.105219  387591 provision.go:87] duration metric: took 202.453369ms to configureAuth
	I1115 10:36:33.105244  387591 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:36:33.105414  387591 config.go:182] Loaded profile config "newest-cni-086099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:33.105509  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:33.123012  387591 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:33.123259  387591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1115 10:36:33.123277  387591 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:36:33.389799  387591 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:36:33.389822  387591 machine.go:97] duration metric: took 3.93932207s to provisionDockerMachine
	I1115 10:36:33.389835  387591 start.go:293] postStartSetup for "newest-cni-086099" (driver="docker")
	I1115 10:36:33.389844  387591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:36:33.389903  387591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:36:33.389946  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:33.409403  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:33.503330  387591 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:36:33.506790  387591 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:36:33.506815  387591 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:36:33.506825  387591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/addons for local assets ...
	I1115 10:36:33.506878  387591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/files for local assets ...
	I1115 10:36:33.506995  387591 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem -> 589622.pem in /etc/ssl/certs
	I1115 10:36:33.507126  387591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:36:33.514570  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:33.531880  387591 start.go:296] duration metric: took 142.028023ms for postStartSetup
	I1115 10:36:33.532012  387591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:36:33.532066  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:33.549908  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:33.640348  387591 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:36:33.645124  387591 fix.go:56] duration metric: took 4.529931109s for fixHost
	I1115 10:36:33.645164  387591 start.go:83] releasing machines lock for "newest-cni-086099", held for 4.529982501s
	I1115 10:36:33.645246  387591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-086099
	I1115 10:36:33.663364  387591 ssh_runner.go:195] Run: cat /version.json
	I1115 10:36:33.663400  387591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:36:33.663445  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:33.663461  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:33.682200  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:33.682521  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:33.827221  387591 ssh_runner.go:195] Run: systemctl --version
	I1115 10:36:33.834019  387591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:36:33.868151  387591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:36:33.872995  387591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:36:33.873067  387591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:36:33.881540  387591 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:36:33.881563  387591 start.go:496] detecting cgroup driver to use...
	I1115 10:36:33.881595  387591 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:36:33.881628  387591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:36:33.895704  387591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:36:33.907633  387591 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:36:33.907681  387591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:36:33.921408  387591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	W1115 10:36:30.745845  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	W1115 10:36:32.746544  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	I1115 10:36:33.933689  387591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:36:34.015025  387591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:36:34.097166  387591 docker.go:234] disabling docker service ...
	I1115 10:36:34.097250  387591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:36:34.111501  387591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:36:34.123898  387591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:36:34.208076  387591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:36:34.289077  387591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:36:34.302010  387591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:36:34.316333  387591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:36:34.316409  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.325113  387591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:36:34.325175  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.333844  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.342343  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.350817  387591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:36:34.359269  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.368008  387591 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.376100  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.384822  387591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:36:34.392091  387591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:36:34.399149  387591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:34.478616  387591 ssh_runner.go:195] Run: sudo systemctl restart crio
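The lines above rewrite /etc/crio/crio.conf.d/02-crio.conf with a series of sed one-liners (pause image, cgroup manager, conmon cgroup, default sysctls) and then restart cri-o. As a rough standalone sketch of the same replace-or-append pattern for the two simple key rewrites (illustrative only, not minikube's implementation; the file path and values are taken from the log):

// Illustrative sketch: set or replace `key = value` lines in the cri-o drop-in
// config, approximating the sed edits shown in the log above.
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setKey replaces any existing `key = ...` line, or appends one if missing.
func setKey(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	line := fmt.Sprintf("%s = %s", key, value)
	if re.MatchString(conf) {
		return re.ReplaceAllString(conf, line)
	}
	return conf + "\n" + line + "\n"
}

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf := string(data)
	conf = setKey(conf, "pause_image", `"registry.k8s.io/pause:3.10.1"`)
	conf = setKey(conf, "cgroup_manager", `"cgroupfs"`)
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		panic(err)
	}
}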
	I1115 10:36:34.580323  387591 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:36:34.580408  387591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:36:34.584509  387591 start.go:564] Will wait 60s for crictl version
	I1115 10:36:34.584568  387591 ssh_runner.go:195] Run: which crictl
	I1115 10:36:34.588078  387591 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:36:34.613070  387591 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:36:34.613150  387591 ssh_runner.go:195] Run: crio --version
	I1115 10:36:34.641080  387591 ssh_runner.go:195] Run: crio --version
	I1115 10:36:34.670335  387591 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:36:34.671690  387591 cli_runner.go:164] Run: docker network inspect newest-cni-086099 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:36:34.689678  387591 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1115 10:36:34.693973  387591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
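The one-liner above drops any existing host.minikube.internal entry from /etc/hosts and appends the current gateway mapping. A minimal standalone Go sketch of that replace-then-append idea (illustrative only; the path, IP and hostname mirror the log, the logic is not minikube's code):

// Illustrative sketch: remove stale host.minikube.internal lines and append
// the fresh entry, mirroring `{ grep -v ... ; echo ... } > tmp; cp tmp /etc/hosts`.
package main

import (
	"os"
	"strings"
)

func main() {
	const (
		path  = "/etc/hosts"
		entry = "192.168.103.1\thost.minikube.internal"
	)

	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}

	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // drop the previously managed entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)

	if err := os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}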
	I1115 10:36:34.705342  387591 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1115 10:36:31.398937  388420 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-026691" ...
	I1115 10:36:31.399016  388420 cli_runner.go:164] Run: docker start default-k8s-diff-port-026691
	I1115 10:36:31.676189  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:31.694382  388420 kic.go:430] container "default-k8s-diff-port-026691" state is running.
	I1115 10:36:31.694751  388420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-026691
	I1115 10:36:31.713425  388420 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/config.json ...
	I1115 10:36:31.713652  388420 machine.go:94] provisionDockerMachine start ...
	I1115 10:36:31.713746  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:31.732991  388420 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:31.733252  388420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1115 10:36:31.733277  388420 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:36:31.734038  388420 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45950->127.0.0.1:33134: read: connection reset by peer
	I1115 10:36:34.867843  388420 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-026691
	
	I1115 10:36:34.867883  388420 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-026691"
	I1115 10:36:34.868072  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:34.887800  388420 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:34.888079  388420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1115 10:36:34.888098  388420 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-026691 && echo "default-k8s-diff-port-026691" | sudo tee /etc/hostname
	I1115 10:36:35.027312  388420 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-026691
	
	I1115 10:36:35.027402  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:35.049307  388420 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:35.049620  388420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1115 10:36:35.049653  388420 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-026691' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-026691/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-026691' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:36:35.185792  388420 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:36:35.185824  388420 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-55448/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-55448/.minikube}
	I1115 10:36:35.185877  388420 ubuntu.go:190] setting up certificates
	I1115 10:36:35.185889  388420 provision.go:84] configureAuth start
	I1115 10:36:35.185975  388420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-026691
	I1115 10:36:35.205215  388420 provision.go:143] copyHostCerts
	I1115 10:36:35.205302  388420 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem, removing ...
	I1115 10:36:35.205325  388420 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem
	I1115 10:36:35.205419  388420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem (1082 bytes)
	I1115 10:36:35.205578  388420 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem, removing ...
	I1115 10:36:35.205600  388420 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem
	I1115 10:36:35.205648  388420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem (1123 bytes)
	I1115 10:36:35.205811  388420 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem, removing ...
	I1115 10:36:35.205831  388420 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem
	I1115 10:36:35.205877  388420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem (1679 bytes)
	I1115 10:36:35.205988  388420 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-026691 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-026691 localhost minikube]
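The provision step above generates a server certificate whose SANs cover loopback, the node IP and the cluster hostnames. A self-contained sketch of issuing such a SAN certificate with Go's crypto/x509 (self-signed here for brevity, whereas minikube signs it with its own CA; the SAN values and validity period are copied from the log, everything else is illustrative):

// Illustrative sketch: create a server certificate carrying the SANs listed
// in the provision log and print it as PEM.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-026691"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-026691", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	// Self-signed for the sketch; a real setup would pass the CA cert/key as parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}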
	I1115 10:36:35.356382  388420 provision.go:177] copyRemoteCerts
	I1115 10:36:35.356441  388420 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:36:35.356476  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:35.375752  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:35.470476  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:36:35.488150  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1115 10:36:35.505264  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:36:35.522854  388420 provision.go:87] duration metric: took 336.947608ms to configureAuth
	I1115 10:36:35.522880  388420 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:36:35.523120  388420 config.go:182] Loaded profile config "default-k8s-diff-port-026691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:35.523282  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:35.543167  388420 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:35.543480  388420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1115 10:36:35.543509  388420 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:36:35.848476  388420 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:36:35.848509  388420 machine.go:97] duration metric: took 4.134839636s to provisionDockerMachine
	I1115 10:36:35.848525  388420 start.go:293] postStartSetup for "default-k8s-diff-port-026691" (driver="docker")
	I1115 10:36:35.848541  388420 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:36:35.848616  388420 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:36:35.848671  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:35.868537  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:35.963605  388420 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:36:35.967175  388420 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:36:35.967199  388420 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:36:35.967209  388420 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/addons for local assets ...
	I1115 10:36:35.967263  388420 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/files for local assets ...
	I1115 10:36:35.967339  388420 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem -> 589622.pem in /etc/ssl/certs
	I1115 10:36:35.967422  388420 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:36:35.975404  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:35.992754  388420 start.go:296] duration metric: took 144.211835ms for postStartSetup
	I1115 10:36:35.992851  388420 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:36:35.992902  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:36.010853  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:36.106652  388420 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:36:36.111301  388420 fix.go:56] duration metric: took 4.732276816s for fixHost
	I1115 10:36:36.111327  388420 start.go:83] releasing machines lock for "default-k8s-diff-port-026691", held for 4.732326241s
	I1115 10:36:36.111401  388420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-026691
	I1115 10:36:36.133087  388420 ssh_runner.go:195] Run: cat /version.json
	I1115 10:36:36.133147  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:36.133224  388420 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:36:36.133295  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:36.161597  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:36.162169  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:34.706341  387591 kubeadm.go:884] updating cluster {Name:newest-cni-086099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:36:34.706463  387591 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:36:34.706520  387591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:34.737832  387591 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:34.737871  387591 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:36:34.737929  387591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:34.765628  387591 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:34.765650  387591 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:36:34.765657  387591 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1115 10:36:34.765750  387591 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-086099 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:36:34.765813  387591 ssh_runner.go:195] Run: crio config
	I1115 10:36:34.812764  387591 cni.go:84] Creating CNI manager for ""
	I1115 10:36:34.812787  387591 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:36:34.812806  387591 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1115 10:36:34.812836  387591 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-086099 NodeName:newest-cni-086099 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:36:34.813018  387591 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-086099"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:36:34.813097  387591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:36:34.821514  387591 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:36:34.821582  387591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:36:34.829425  387591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1115 10:36:34.841803  387591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:36:34.854099  387591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1115 10:36:34.867123  387591 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:36:34.871300  387591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:34.882157  387591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:34.965624  387591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:36:34.991396  387591 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099 for IP: 192.168.103.2
	I1115 10:36:34.991421  387591 certs.go:195] generating shared ca certs ...
	I1115 10:36:34.991442  387591 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:34.991611  387591 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 10:36:34.991670  387591 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 10:36:34.991685  387591 certs.go:257] generating profile certs ...
	I1115 10:36:34.991800  387591 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/client.key
	I1115 10:36:34.991881  387591 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key.d719cdad
	I1115 10:36:34.991938  387591 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.key
	I1115 10:36:34.992114  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:36:34.992160  387591 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:36:34.992182  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:36:34.992223  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:36:34.992266  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:36:34.992298  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:36:34.992360  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:34.993060  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:36:35.012346  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:36:35.032525  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:36:35.052616  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:36:35.116969  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 10:36:35.141400  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:36:35.160318  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:36:35.178367  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:36:35.231343  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:36:35.251073  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:36:35.269574  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:36:35.287839  387591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:36:35.300609  387591 ssh_runner.go:195] Run: openssl version
	I1115 10:36:35.306757  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:36:35.315111  387591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:36:35.318673  387591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:36:35.318726  387591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:36:35.352595  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:36:35.360661  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:36:35.369044  387591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:36:35.373102  387591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:36:35.373149  387591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:36:35.407763  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:36:35.416805  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:36:35.426105  387591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:35.429879  387591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:35.429928  387591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:35.464376  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
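The sequence above installs each CA under /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem). A standalone sketch of that hash-and-symlink step (illustrative only; it assumes openssl is on PATH and uses the paths from the log):

// Illustrative sketch: compute the OpenSSL subject hash of a CA PEM and point
// the conventional /etc/ssl/certs/<hash>.0 symlink at it, like `ln -fs` above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"

	// `openssl x509 -hash -noout` prints the subject hash (e.g. b5213941).
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // equivalent of the -f in `ln -fs`
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println(link, "->", pem)
}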
	I1115 10:36:35.472689  387591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:36:35.476537  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:36:35.513422  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:36:35.552107  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:36:35.627892  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:36:35.738207  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:36:35.927631  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1115 10:36:36.020791  387591 kubeadm.go:401] StartCluster: {Name:newest-cni-086099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:36.020915  387591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:36:36.020993  387591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:36:36.054712  387591 cri.go:89] found id: "38ec6363bcab12e98490c10ae199bee3e96e76c3b252eb09eaacbceb79f4fee3"
	I1115 10:36:36.054741  387591 cri.go:89] found id: "dcddb7cd9963badc91f7602efc0c7dd3a6aa66928df5288774cb992a0f211b2c"
	I1115 10:36:36.054748  387591 cri.go:89] found id: "938d8a7a407d113893b3543524a1f018292ab3c06ad38c37347f5c09b4f19aed"
	I1115 10:36:36.054753  387591 cri.go:89] found id: "6799daac297c17fbe94a59a4b23a00cc23bdfe7671f3f31f803b074be4ef25a5"
	I1115 10:36:36.054758  387591 cri.go:89] found id: ""
	I1115 10:36:36.054810  387591 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:36:36.122342  387591 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:36Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:36:36.122434  387591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:36:36.132788  387591 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:36:36.132807  387591 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:36:36.132853  387591 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:36:36.144175  387591 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:36:36.145209  387591 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-086099" does not appear in /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:36:36.145870  387591 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-55448/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-086099" cluster setting kubeconfig missing "newest-cni-086099" context setting]
	I1115 10:36:36.146847  387591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:36.149871  387591 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:36:36.217177  387591 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1115 10:36:36.217217  387591 kubeadm.go:602] duration metric: took 84.40299ms to restartPrimaryControlPlane
	I1115 10:36:36.217231  387591 kubeadm.go:403] duration metric: took 196.454161ms to StartCluster
	I1115 10:36:36.217253  387591 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:36.217343  387591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:36:36.218632  387591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:36.218872  387591 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:36:36.218972  387591 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:36:36.219074  387591 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-086099"
	I1115 10:36:36.219094  387591 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-086099"
	W1115 10:36:36.219105  387591 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:36:36.219138  387591 host.go:66] Checking if "newest-cni-086099" exists ...
	I1115 10:36:36.219158  387591 addons.go:70] Setting dashboard=true in profile "newest-cni-086099"
	I1115 10:36:36.219163  387591 config.go:182] Loaded profile config "newest-cni-086099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:36.219193  387591 addons.go:239] Setting addon dashboard=true in "newest-cni-086099"
	W1115 10:36:36.219202  387591 addons.go:248] addon dashboard should already be in state true
	I1115 10:36:36.219217  387591 addons.go:70] Setting default-storageclass=true in profile "newest-cni-086099"
	I1115 10:36:36.219235  387591 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-086099"
	I1115 10:36:36.219248  387591 host.go:66] Checking if "newest-cni-086099" exists ...
	I1115 10:36:36.219557  387591 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:36.219712  387591 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:36.219712  387591 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:36.220680  387591 out.go:179] * Verifying Kubernetes components...
	I1115 10:36:36.221665  387591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:36.248161  387591 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:36:36.248172  387591 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 10:36:36.249608  387591 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:36:36.249628  387591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:36:36.249683  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:36.249733  387591 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 10:36:36.324481  388420 ssh_runner.go:195] Run: systemctl --version
	I1115 10:36:36.336623  388420 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:36:36.372576  388420 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:36:36.377572  388420 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:36:36.377633  388420 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:36:36.385687  388420 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:36:36.385710  388420 start.go:496] detecting cgroup driver to use...
	I1115 10:36:36.385740  388420 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:36:36.385776  388420 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:36:36.399728  388420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:36:36.411622  388420 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:36:36.411694  388420 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:36:36.431786  388420 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:36:36.449270  388420 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:36:36.538378  388420 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:36:36.622459  388420 docker.go:234] disabling docker service ...
	I1115 10:36:36.622563  388420 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:36:36.644022  388420 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:36:36.656349  388420 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:36:36.757453  388420 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:36:36.851752  388420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:36:36.864024  388420 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:36:36.878189  388420 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:36:36.878243  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.886869  388420 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:36:36.886944  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.895649  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.904129  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.912660  388420 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:36:36.922601  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.934730  388420 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.945527  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.955227  388420 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:36:36.962702  388420 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:36:36.969927  388420 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:37.064102  388420 ssh_runner.go:195] Run: sudo systemctl restart crio
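
The sed commands above amount to rewriting two keys in the CRI-O drop-in (/etc/crio/crio.conf.d/02-crio.conf): the pause image and the cgroup manager, followed by a daemon-reload and crio restart. A minimal Go sketch of the same in-place edit, using the path and values from the log; the setCrioKey helper and its regex are illustrative, not minikube's actual implementation:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioKey mirrors the sed commands in the log: it replaces any existing
// `key = ...` line in the CRI-O drop-in with the desired quoted value.
// Hypothetical helper for illustration only.
func setCrioKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	updated := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, updated, 0o644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	// Values taken from the log above; doing this for real requires root
	// and a `systemctl restart crio` afterwards.
	if err := setCrioKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1"); err != nil {
		panic(err)
	}
	if err := setCrioKey(conf, "cgroup_manager", "cgroupfs"); err != nil {
		panic(err)
	}
}
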
	I1115 10:36:37.181392  388420 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:36:37.181469  388420 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:36:37.185705  388420 start.go:564] Will wait 60s for crictl version
	I1115 10:36:37.185759  388420 ssh_runner.go:195] Run: which crictl
	I1115 10:36:37.189374  388420 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:36:37.214797  388420 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:36:37.214872  388420 ssh_runner.go:195] Run: crio --version
	I1115 10:36:37.247024  388420 ssh_runner.go:195] Run: crio --version
	I1115 10:36:37.283127  388420 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1115 10:36:35.246243  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	I1115 10:36:37.246256  377744 pod_ready.go:94] pod "coredns-66bc5c9577-fjzk5" is "Ready"
	I1115 10:36:37.246283  377744 pod_ready.go:86] duration metric: took 33.505674032s for pod "coredns-66bc5c9577-fjzk5" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.248931  377744 pod_ready.go:83] waiting for pod "etcd-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.253449  377744 pod_ready.go:94] pod "etcd-embed-certs-719574" is "Ready"
	I1115 10:36:37.253477  377744 pod_ready.go:86] duration metric: took 4.523106ms for pod "etcd-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.258749  377744 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.262996  377744 pod_ready.go:94] pod "kube-apiserver-embed-certs-719574" is "Ready"
	I1115 10:36:37.263019  377744 pod_ready.go:86] duration metric: took 4.2473ms for pod "kube-apiserver-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.265400  377744 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.444138  377744 pod_ready.go:94] pod "kube-controller-manager-embed-certs-719574" is "Ready"
	I1115 10:36:37.444168  377744 pod_ready.go:86] duration metric: took 178.743562ms for pod "kube-controller-manager-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.644722  377744 pod_ready.go:83] waiting for pod "kube-proxy-kmc8c" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:38.044247  377744 pod_ready.go:94] pod "kube-proxy-kmc8c" is "Ready"
	I1115 10:36:38.044277  377744 pod_ready.go:86] duration metric: took 399.527336ms for pod "kube-proxy-kmc8c" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:38.245350  377744 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:38.644894  377744 pod_ready.go:94] pod "kube-scheduler-embed-certs-719574" is "Ready"
	I1115 10:36:38.645014  377744 pod_ready.go:86] duration metric: took 399.62796ms for pod "kube-scheduler-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:38.645030  377744 pod_ready.go:40] duration metric: took 34.90782271s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:36:38.702511  377744 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:36:38.706562  377744 out.go:179] * Done! kubectl is now configured to use "embed-certs-719574" cluster and "default" namespace by default
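
The pod_ready wait logged by process 377744 above polls each kube-system pod until its Ready condition is true or the pod disappears. A rough client-go sketch of that polling loop, assuming the default kubeconfig location; the function name, timeout, and poll interval are illustrative, not minikube's code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod in kube-system until it reports Ready=True,
// is deleted, or the deadline expires (roughly what pod_ready.go logs above).
func waitPodReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if errors.IsNotFound(err) {
			return nil // pod is gone, which the wait also accepts
		}
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s not Ready after %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(cs, "coredns-66bc5c9577-fjzk5", 6*time.Minute); err != nil {
		panic(err)
	}
}
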
	I1115 10:36:37.284492  388420 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-026691 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:36:37.302095  388420 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1115 10:36:37.306321  388420 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:37.316768  388420 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-026691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:36:37.316911  388420 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:36:37.316980  388420 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:37.354039  388420 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:37.354063  388420 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:36:37.354121  388420 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:37.384223  388420 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:37.384249  388420 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:36:37.384257  388420 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1115 10:36:37.384353  388420 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-026691 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:36:37.384416  388420 ssh_runner.go:195] Run: crio config
	I1115 10:36:37.429588  388420 cni.go:84] Creating CNI manager for ""
	I1115 10:36:37.429616  388420 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:36:37.429637  388420 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:36:37.429663  388420 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-026691 NodeName:default-k8s-diff-port-026691 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:36:37.429840  388420 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-026691"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:36:37.429922  388420 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:36:37.438488  388420 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:36:37.438583  388420 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:36:37.446984  388420 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1115 10:36:37.459608  388420 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:36:37.472652  388420 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1115 10:36:37.484924  388420 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:36:37.488541  388420 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
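
The bash one-liner above strips any existing control-plane.minikube.internal line from /etc/hosts and appends the current IP, so repeated runs stay idempotent. A small Go equivalent of that rewrite; the helper name is illustrative, the path and hostname come from the log:

package main

import (
	"os"
	"strings"
)

// ensureHostEntry rewrites /etc/hosts so that exactly one line maps the given
// hostname, mirroring the grep -v / echo / cp pipeline shown in the log.
func ensureHostEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop any stale entry for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostEntry("/etc/hosts", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}
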
	I1115 10:36:37.498126  388420 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:37.587175  388420 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:36:37.609456  388420 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691 for IP: 192.168.85.2
	I1115 10:36:37.609480  388420 certs.go:195] generating shared ca certs ...
	I1115 10:36:37.609501  388420 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:37.609671  388420 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 10:36:37.609735  388420 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 10:36:37.609750  388420 certs.go:257] generating profile certs ...
	I1115 10:36:37.609859  388420 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/client.key
	I1115 10:36:37.609921  388420 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.key.f8824eec
	I1115 10:36:37.610007  388420 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.key
	I1115 10:36:37.610146  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:36:37.610198  388420 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:36:37.610212  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:36:37.610244  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:36:37.610278  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:36:37.610306  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:36:37.610359  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:37.611122  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:36:37.629925  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:36:37.650833  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:36:37.671862  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:36:37.696427  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1115 10:36:37.763348  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:36:37.782654  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:36:37.800720  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:36:37.817628  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:36:37.835327  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:36:37.856769  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:36:37.876039  388420 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:36:37.891255  388420 ssh_runner.go:195] Run: openssl version
	I1115 10:36:37.898994  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:36:37.907571  388420 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:36:37.912280  388420 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:36:37.912337  388420 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:36:37.950692  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:36:37.959456  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:36:37.968450  388420 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:36:37.972465  388420 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:36:37.972521  388420 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:36:38.008129  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:36:38.016745  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:36:38.027414  388420 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:38.031718  388420 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:38.031792  388420 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:38.077405  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:36:38.086004  388420 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:36:38.089990  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:36:38.127939  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:36:38.181791  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:36:38.256153  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:36:38.368577  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:36:38.543333  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
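
Each `openssl x509 ... -checkend 86400` run above asks whether the given certificate expires within the next 24 hours. The same check expressed with Go's crypto/x509, as a sketch; the certificate path is just one of the files checked in the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// inside the given window, i.e. the equivalent of `openssl x509 -checkend`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
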
	I1115 10:36:38.645754  388420 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-026691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:38.645863  388420 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:36:38.645935  388420 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:36:38.685210  388420 cri.go:89] found id: "b04411b3a023360282fda35601339f2000829b38e465e5c4c130b7c58e111bb3"
	I1115 10:36:38.685237  388420 cri.go:89] found id: "971b8e4c2073b59c0dd66594f19cfc1d0b9f81cb1bad24b21946d5ea7b012768"
	I1115 10:36:38.685254  388420 cri.go:89] found id: "6a1db649ea51d34c5661db65312d1c8660828376983f865ce0b3d2801b219c2d"
	I1115 10:36:38.685259  388420 cri.go:89] found id: "58595dd2cf4ce1cb8f740a6594f3ee7a3c7d2587682b4e6c526266c0067303a7"
	I1115 10:36:38.685262  388420 cri.go:89] found id: ""
	I1115 10:36:38.685312  388420 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:36:38.750674  388420 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:38Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:36:38.750744  388420 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:36:38.769157  388420 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:36:38.769186  388420 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:36:38.769238  388420 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:36:38.842499  388420 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:36:38.845337  388420 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-026691" does not appear in /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:36:38.846840  388420 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-55448/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-026691" cluster setting kubeconfig missing "default-k8s-diff-port-026691" context setting]
	I1115 10:36:38.849516  388420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:38.855210  388420 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:36:38.870026  388420 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1115 10:36:38.870059  388420 kubeadm.go:602] duration metric: took 100.86647ms to restartPrimaryControlPlane
	I1115 10:36:38.870073  388420 kubeadm.go:403] duration metric: took 224.328768ms to StartCluster
	I1115 10:36:38.870094  388420 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:38.870172  388420 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:36:38.872536  388420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:38.872812  388420 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:36:38.873059  388420 config.go:182] Loaded profile config "default-k8s-diff-port-026691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:38.873024  388420 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:36:38.873181  388420 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-026691"
	I1115 10:36:38.873220  388420 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-026691"
	W1115 10:36:38.873240  388420 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:36:38.873315  388420 host.go:66] Checking if "default-k8s-diff-port-026691" exists ...
	I1115 10:36:38.873258  388420 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-026691"
	I1115 10:36:38.873640  388420 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-026691"
	W1115 10:36:38.873663  388420 addons.go:248] addon dashboard should already be in state true
	I1115 10:36:38.873444  388420 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-026691"
	I1115 10:36:38.873728  388420 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-026691"
	I1115 10:36:38.873753  388420 host.go:66] Checking if "default-k8s-diff-port-026691" exists ...
	I1115 10:36:38.874091  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:38.874589  388420 out.go:179] * Verifying Kubernetes components...
	I1115 10:36:38.874818  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:38.875168  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:38.876706  388420 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:38.907308  388420 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:36:38.907363  388420 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-026691"
	W1115 10:36:38.907464  388420 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:36:38.907503  388420 host.go:66] Checking if "default-k8s-diff-port-026691" exists ...
	I1115 10:36:38.908043  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:38.912208  388420 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:36:38.912236  388420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:36:38.912295  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:38.915346  388420 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 10:36:38.916793  388420 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 10:36:36.250323  387591 addons.go:239] Setting addon default-storageclass=true in "newest-cni-086099"
	W1115 10:36:36.250350  387591 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:36:36.250389  387591 host.go:66] Checking if "newest-cni-086099" exists ...
	I1115 10:36:36.251476  387591 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:36.255103  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 10:36:36.255128  387591 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 10:36:36.255190  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:36.278537  387591 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:36:36.278565  387591 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:36:36.278644  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:36.280814  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:36.281721  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:36.296440  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:36.630526  387591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:36:36.633566  387591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:36:36.636633  387591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:36:36.638099  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 10:36:36.638116  387591 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 10:36:36.724472  387591 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:36:36.724559  387591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:36:36.729948  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 10:36:36.730015  387591 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 10:36:36.826253  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 10:36:36.826282  387591 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 10:36:36.843537  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 10:36:36.843560  387591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 10:36:36.931895  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 10:36:36.931924  387591 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 10:36:36.945766  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 10:36:36.945791  387591 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 10:36:37.023562  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 10:36:37.023593  387591 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 10:36:37.038918  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 10:36:37.038944  387591 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 10:36:37.052909  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:36:37.052937  387591 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 10:36:37.119950  387591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:36:39.816288  387591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.182684264s)
	I1115 10:36:40.959315  387591 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.234727667s)
	I1115 10:36:40.959363  387591 api_server.go:72] duration metric: took 4.740464162s to wait for apiserver process to appear ...
	I1115 10:36:40.959371  387591 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:36:40.959395  387591 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1115 10:36:40.959325  387591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.322653976s)
	I1115 10:36:40.959440  387591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.839423734s)
	I1115 10:36:40.962518  387591 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-086099 addons enable metrics-server
	
	I1115 10:36:40.964092  387591 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1115 10:36:38.917819  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 10:36:38.917851  388420 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 10:36:38.917924  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:38.930932  388420 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:36:38.930982  388420 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:36:38.931053  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:38.933702  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:38.939670  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:38.960258  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:39.257807  388420 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:36:39.264707  388420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:36:39.270235  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 10:36:39.270261  388420 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 10:36:39.274532  388420 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-026691" to be "Ready" ...
	I1115 10:36:39.351682  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 10:36:39.351725  388420 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 10:36:39.357989  388420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:36:39.374984  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 10:36:39.375011  388420 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 10:36:39.457352  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 10:36:39.457377  388420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 10:36:39.542591  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 10:36:39.542618  388420 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 10:36:39.565925  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 10:36:39.566041  388420 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 10:36:39.580123  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 10:36:39.580242  388420 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 10:36:39.655102  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 10:36:39.655149  388420 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 10:36:39.669218  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:36:39.669246  388420 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 10:36:39.683183  388420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:36:40.965416  387591 addons.go:515] duration metric: took 4.746465999s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1115 10:36:40.965454  387591 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:36:40.965477  387591 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:36:41.460167  387591 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1115 10:36:41.465475  387591 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1115 10:36:41.466642  387591 api_server.go:141] control plane version: v1.34.1
	I1115 10:36:41.466668  387591 api_server.go:131] duration metric: took 507.289044ms to wait for apiserver health ...
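
The healthz wait above retries the apiserver's /healthz endpoint until it returns 200; the earlier 500 responses with only the rbac/bootstrap-roles post-start hook pending are expected while the control plane finishes starting. A stripped-down sketch of that polling loop; the URL and the skip-verify HTTP client are assumptions for a self-contained example, not minikube's actual client setup:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200 OK
// or the deadline passes, printing the body on non-200 much like the log above.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert is signed by minikube's own CA; skipping
		// verification keeps this sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.103.2:8443/healthz", time.Minute); err != nil {
		panic(err)
	}
}
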
	I1115 10:36:41.466679  387591 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:36:41.470116  387591 system_pods.go:59] 8 kube-system pods found
	I1115 10:36:41.470165  387591 system_pods.go:61] "coredns-66bc5c9577-rblh2" [903029e0-3b15-43f3-836a-884de528cbc9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 10:36:41.470180  387591 system_pods.go:61] "etcd-newest-cni-086099" [6768a007-08a6-47b0-9917-cf54f577829b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:36:41.470190  387591 system_pods.go:61] "kindnet-2h7mm" [1b25f4e6-5f26-42ce-8ceb-56003682c785] Running
	I1115 10:36:41.470200  387591 system_pods.go:61] "kube-apiserver-newest-cni-086099" [3ca22829-f679-44bf-94e5-e4a368e13dcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:36:41.470210  387591 system_pods.go:61] "kube-controller-manager-newest-cni-086099" [1f45f32a-2d9e-49c0-9c69-d2aa59324564] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:36:41.470219  387591 system_pods.go:61] "kube-proxy-6jpzt" [7409c19f-472b-4074-81d0-8e43ac2bc9d4] Running
	I1115 10:36:41.470226  387591 system_pods.go:61] "kube-scheduler-newest-cni-086099" [c3510e0f-9b51-4fb5-bc6e-d0e47be8f5ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:36:41.470235  387591 system_pods.go:61] "storage-provisioner" [23166a3f-bb02-48ca-ab00-721c8c46525d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 10:36:41.470247  387591 system_pods.go:74] duration metric: took 3.560608ms to wait for pod list to return data ...
	I1115 10:36:41.470262  387591 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:36:41.472726  387591 default_sa.go:45] found service account: "default"
	I1115 10:36:41.472751  387591 default_sa.go:55] duration metric: took 2.478273ms for default service account to be created ...
	I1115 10:36:41.472765  387591 kubeadm.go:587] duration metric: took 5.253867745s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1115 10:36:41.472786  387591 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:36:41.475250  387591 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:36:41.475273  387591 node_conditions.go:123] node cpu capacity is 8
	I1115 10:36:41.475284  387591 node_conditions.go:105] duration metric: took 2.490696ms to run NodePressure ...
	I1115 10:36:41.475297  387591 start.go:242] waiting for startup goroutines ...
	I1115 10:36:41.475306  387591 start.go:247] waiting for cluster config update ...
	I1115 10:36:41.475322  387591 start.go:256] writing updated cluster config ...
	I1115 10:36:41.475622  387591 ssh_runner.go:195] Run: rm -f paused
	I1115 10:36:41.529383  387591 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:36:41.531753  387591 out.go:179] * Done! kubectl is now configured to use "newest-cni-086099" cluster and "default" namespace by default
	I1115 10:36:42.149798  388420 node_ready.go:49] node "default-k8s-diff-port-026691" is "Ready"
	I1115 10:36:42.149832  388420 node_ready.go:38] duration metric: took 2.87526393s for node "default-k8s-diff-port-026691" to be "Ready" ...
	I1115 10:36:42.149851  388420 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:36:42.149915  388420 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:36:43.654191  388420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.38943226s)
	I1115 10:36:43.654229  388420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.29621492s)
	I1115 10:36:43.654402  388420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.971169317s)
	I1115 10:36:43.654437  388420 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.50449925s)
	I1115 10:36:43.654474  388420 api_server.go:72] duration metric: took 4.78163246s to wait for apiserver process to appear ...
	I1115 10:36:43.654482  388420 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:36:43.654504  388420 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1115 10:36:43.655988  388420 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-026691 addons enable metrics-server
	
	I1115 10:36:43.659469  388420 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:36:43.659501  388420 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:36:43.660788  388420 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1115 10:36:43.661827  388420 addons.go:515] duration metric: took 4.788813528s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1115 10:36:44.155099  388420 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1115 10:36:44.160271  388420 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1115 10:36:44.161286  388420 api_server.go:141] control plane version: v1.34.1
	I1115 10:36:44.161316  388420 api_server.go:131] duration metric: took 506.825578ms to wait for apiserver health ...
	I1115 10:36:44.161327  388420 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:36:44.164559  388420 system_pods.go:59] 8 kube-system pods found
	I1115 10:36:44.164606  388420 system_pods.go:61] "coredns-66bc5c9577-5q2j4" [e6c4ca54-e0fe-45ee-88a6-33bdccbb876c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:44.164622  388420 system_pods.go:61] "etcd-default-k8s-diff-port-026691" [f8228efa-91a3-41fb-9952-3b4063ddf162] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:36:44.164631  388420 system_pods.go:61] "kindnet-hjdrk" [9e1f7579-f5a2-44cd-b77f-71219cd8827d] Running
	I1115 10:36:44.164645  388420 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-026691" [9da86b3b-bfcd-4c40-ac86-ee9d9faeb9b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:36:44.164658  388420 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-026691" [23d9be87-1de1-4c38-b65e-b084cf0fed25] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:36:44.164667  388420 system_pods.go:61] "kube-proxy-c5bw5" [ee48d34b-ae60-4a03-a7bd-df76e089eebb] Running
	I1115 10:36:44.164677  388420 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-026691" [52b3711f-015c-4af6-8672-58642cbc0c16] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:36:44.164686  388420 system_pods.go:61] "storage-provisioner" [7dedf7a9-415d-4260-b225-7ca171744768] Running
	I1115 10:36:44.164696  388420 system_pods.go:74] duration metric: took 3.356326ms to wait for pod list to return data ...
	I1115 10:36:44.164709  388420 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:36:44.166570  388420 default_sa.go:45] found service account: "default"
	I1115 10:36:44.166593  388420 default_sa.go:55] duration metric: took 1.872347ms for default service account to be created ...
	I1115 10:36:44.166603  388420 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:36:44.169425  388420 system_pods.go:86] 8 kube-system pods found
	I1115 10:36:44.169450  388420 system_pods.go:89] "coredns-66bc5c9577-5q2j4" [e6c4ca54-e0fe-45ee-88a6-33bdccbb876c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:44.169459  388420 system_pods.go:89] "etcd-default-k8s-diff-port-026691" [f8228efa-91a3-41fb-9952-3b4063ddf162] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:36:44.169467  388420 system_pods.go:89] "kindnet-hjdrk" [9e1f7579-f5a2-44cd-b77f-71219cd8827d] Running
	I1115 10:36:44.169472  388420 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-026691" [9da86b3b-bfcd-4c40-ac86-ee9d9faeb9b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:36:44.169482  388420 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-026691" [23d9be87-1de1-4c38-b65e-b084cf0fed25] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:36:44.169497  388420 system_pods.go:89] "kube-proxy-c5bw5" [ee48d34b-ae60-4a03-a7bd-df76e089eebb] Running
	I1115 10:36:44.169512  388420 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-026691" [52b3711f-015c-4af6-8672-58642cbc0c16] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:36:44.169521  388420 system_pods.go:89] "storage-provisioner" [7dedf7a9-415d-4260-b225-7ca171744768] Running
	I1115 10:36:44.169532  388420 system_pods.go:126] duration metric: took 2.922555ms to wait for k8s-apps to be running ...
	I1115 10:36:44.169541  388420 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:36:44.169593  388420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:36:44.183310  388420 system_svc.go:56] duration metric: took 13.759187ms WaitForService to wait for kubelet
	I1115 10:36:44.183342  388420 kubeadm.go:587] duration metric: took 5.310501278s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:36:44.183366  388420 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:36:44.186800  388420 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:36:44.186826  388420 node_conditions.go:123] node cpu capacity is 8
	I1115 10:36:44.186843  388420 node_conditions.go:105] duration metric: took 3.463462ms to run NodePressure ...
	I1115 10:36:44.186859  388420 start.go:242] waiting for startup goroutines ...
	I1115 10:36:44.186872  388420 start.go:247] waiting for cluster config update ...
	I1115 10:36:44.186896  388420 start.go:256] writing updated cluster config ...
	I1115 10:36:44.187247  388420 ssh_runner.go:195] Run: rm -f paused
	I1115 10:36:44.191349  388420 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:36:44.194864  388420 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5q2j4" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 10:36:46.200419  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:36:48.202278  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:36:50.700646  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
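The log block above records minikube's readiness loop: it repeatedly hits https://192.168.85.2:8444/healthz, tolerates the transient 500 from the rbac/bootstrap-roles post-start hook, and then waits for each control-plane pod to report Ready. The following is a minimal Go sketch of that polling pattern only; it is not minikube's actual api_server.go, and the URL, poll interval, and overall timeout are assumptions chosen for the example.

	// Illustrative sketch: poll an apiserver /healthz endpoint until it returns
	// HTTP 200 or the deadline expires. The endpoint and timings are placeholders.
	package main

	import (
		"context"
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(ctx context.Context, url string) error {
		// The apiserver here serves a cluster-internal cert, so verification is
		// skipped in this sketch; a real client would trust the cluster CA instead.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz answered "ok"
				}
				// A 500 listing "[-]poststarthook/rbac/bootstrap-roles failed" is
				// expected briefly while bootstrap RBAC roles are still being created.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		if err := waitForHealthz(ctx, "https://192.168.85.2:8444/healthz"); err != nil {
			fmt.Println("apiserver never became healthy:", err)
		}
	}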
	
	
	==> CRI-O <==
	Nov 15 10:36:42 embed-certs-719574 crio[683]: time="2025-11-15T10:36:42.955556653Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:36:42 embed-certs-719574 crio[683]: time="2025-11-15T10:36:42.962918816Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:36:42 embed-certs-719574 crio[683]: time="2025-11-15T10:36:42.963019392Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:36:42 embed-certs-719574 crio[683]: time="2025-11-15T10:36:42.963052207Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:36:42 embed-certs-719574 crio[683]: time="2025-11-15T10:36:42.967912794Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:36:42 embed-certs-719574 crio[683]: time="2025-11-15T10:36:42.968355595Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:36:42 embed-certs-719574 crio[683]: time="2025-11-15T10:36:42.96848079Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:36:42 embed-certs-719574 crio[683]: time="2025-11-15T10:36:42.973199948Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:36:42 embed-certs-719574 crio[683]: time="2025-11-15T10:36:42.973226606Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:36:42 embed-certs-719574 crio[683]: time="2025-11-15T10:36:42.973250345Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:36:42 embed-certs-719574 crio[683]: time="2025-11-15T10:36:42.977375806Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:36:42 embed-certs-719574 crio[683]: time="2025-11-15T10:36:42.97781242Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:36:42 embed-certs-719574 crio[683]: time="2025-11-15T10:36:42.977851581Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:36:42 embed-certs-719574 crio[683]: time="2025-11-15T10:36:42.983376268Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:36:42 embed-certs-719574 crio[683]: time="2025-11-15T10:36:42.983404126Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:36:50 embed-certs-719574 crio[683]: time="2025-11-15T10:36:50.883021144Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6370495a-7a2c-4415-ba6c-8042137c8168 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:36:50 embed-certs-719574 crio[683]: time="2025-11-15T10:36:50.884050271Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=205796f1-51f1-424d-b738-80103f7b69e6 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:36:50 embed-certs-719574 crio[683]: time="2025-11-15T10:36:50.885163545Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vknb8/dashboard-metrics-scraper" id=e955ad95-fb48-4a24-b352-e7d7fbd8f3cb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:36:50 embed-certs-719574 crio[683]: time="2025-11-15T10:36:50.885316374Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:50 embed-certs-719574 crio[683]: time="2025-11-15T10:36:50.893680665Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:50 embed-certs-719574 crio[683]: time="2025-11-15T10:36:50.894469525Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:50 embed-certs-719574 crio[683]: time="2025-11-15T10:36:50.919592624Z" level=info msg="Created container 3ead158324196d73b353abf0dbc9c77d451a7d90a3b992875227cdf4ff71cadf: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vknb8/dashboard-metrics-scraper" id=e955ad95-fb48-4a24-b352-e7d7fbd8f3cb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:36:50 embed-certs-719574 crio[683]: time="2025-11-15T10:36:50.920230218Z" level=info msg="Starting container: 3ead158324196d73b353abf0dbc9c77d451a7d90a3b992875227cdf4ff71cadf" id=0dca07c5-631c-45fb-94f3-e1b356fdea0e name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:36:50 embed-certs-719574 crio[683]: time="2025-11-15T10:36:50.922228703Z" level=info msg="Started container" PID=1976 containerID=3ead158324196d73b353abf0dbc9c77d451a7d90a3b992875227cdf4ff71cadf description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vknb8/dashboard-metrics-scraper id=0dca07c5-631c-45fb-94f3-e1b356fdea0e name=/runtime.v1.RuntimeService/StartContainer sandboxID=ec80aacc8d40afae47d87261d086b26caf7baaa7028295fb5e42a8808fff30d5
	Nov 15 10:36:50 embed-certs-719574 conmon[1974]: conmon 3ead158324196d73b353 <ninfo>: container 1976 exited with status 1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	3ead158324196       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           5 seconds ago       Exited              dashboard-metrics-scraper   3                   ec80aacc8d40a       dashboard-metrics-scraper-6ffb444bf9-vknb8   kubernetes-dashboard
	fb08cb8a4d59b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         2                   ac40d5f35252f       storage-provisioner                          kube-system
	47d1f78e14958       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago      Exited              dashboard-metrics-scraper   2                   ec80aacc8d40a       dashboard-metrics-scraper-6ffb444bf9-vknb8   kubernetes-dashboard
	6587299f75a35       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   a5da08dc35ea4       kubernetes-dashboard-855c9754f9-tj9l5        kubernetes-dashboard
	2b7fc8178ede9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     1                   2eafe2bafc6d0       coredns-66bc5c9577-fjzk5                     kube-system
	f54cad1c6353f       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   8a7b25048e462       busybox                                      default
	10b6f8a418fda       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         1                   ac40d5f35252f       storage-provisioner                          kube-system
	f676bcf138c32       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           53 seconds ago      Running             kube-proxy                  1                   9d0e5248f3627       kube-proxy-kmc8c                             kube-system
	7fc2fdf9c30a8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 1                   ed7a712085e5d       kindnet-ql2r4                                kube-system
	34a183c86eaa1       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           57 seconds ago      Running             kube-controller-manager     1                   16d81d004d85b       kube-controller-manager-embed-certs-719574   kube-system
	a04037d02e2f1       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           57 seconds ago      Running             kube-scheduler              1                   20213b61f1710       kube-scheduler-embed-certs-719574            kube-system
	d2523d5b7384a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           57 seconds ago      Running             kube-apiserver              1                   3b4646742c423       kube-apiserver-embed-certs-719574            kube-system
	56627175c47b8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           57 seconds ago      Running             etcd                        1                   bd879d9864c53       etcd-embed-certs-719574                      kube-system
	
	
	==> coredns [2b7fc8178ede99bd2bac3d421353e5930c25042c3ada59734b1a0b0847235087] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40794 - 10169 "HINFO IN 5033773871326012940.3699568236983148320. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015344083s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-719574
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-719574
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=embed-certs-719574
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_35_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:35:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-719574
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:36:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:36:33 +0000   Sat, 15 Nov 2025 10:34:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:36:33 +0000   Sat, 15 Nov 2025 10:34:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:36:33 +0000   Sat, 15 Nov 2025 10:34:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:36:33 +0000   Sat, 15 Nov 2025 10:35:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-719574
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                4a98aacb-8676-41cf-a57c-20957fa3757b
	  Boot ID:                    6a33f504-3a8c-4d49-8fc5-301cf5275efc
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-fjzk5                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-embed-certs-719574                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-ql2r4                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-embed-certs-719574             250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-embed-certs-719574    200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-kmc8c                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-embed-certs-719574             100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vknb8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-tj9l5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 106s                 kube-proxy       
	  Normal   Starting                 53s                  kube-proxy       
	  Warning  CgroupV1                 2m3s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m2s (x9 over 2m3s)  kubelet          Node embed-certs-719574 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m2s (x8 over 2m3s)  kubelet          Node embed-certs-719574 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m2s (x7 over 2m3s)  kubelet          Node embed-certs-719574 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  113s                 kubelet          Node embed-certs-719574 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 113s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    113s                 kubelet          Node embed-certs-719574 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     113s                 kubelet          Node embed-certs-719574 status is now: NodeHasSufficientPID
	  Normal   Starting                 113s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           109s                 node-controller  Node embed-certs-719574 event: Registered Node embed-certs-719574 in Controller
	  Normal   NodeReady                96s                  kubelet          Node embed-certs-719574 status is now: NodeReady
	  Normal   Starting                 59s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s (x8 over 59s)    kubelet          Node embed-certs-719574 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x8 over 59s)    kubelet          Node embed-certs-719574 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x8 over 59s)    kubelet          Node embed-certs-719574 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           51s                  node-controller  Node embed-certs-719574 event: Registered Node embed-certs-719574 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 7f 53 f9 15 93 08 06
	[  +0.017821] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c6 66 25 e9 45 28 08 06
	[ +19.178294] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 a3 65 b9 28 15 08 06
	[  +0.347929] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 6d df 33 a5 ad 08 06
	[  +0.000319] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 7f 53 f9 15 93 08 06
	[Nov15 10:33] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 96 ed 10 e8 9a 80 08 06
	[  +0.000481] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 a3 65 b9 28 15 08 06
	[ +35.214217] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 c2 9a 56 52 0c 08 06
	[  +9.104720] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 c2 c4 d1 7c dd 08 06
	[Nov15 10:34] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 68 1e 80 53 05 08 06
	[  +0.000444] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 c2 c4 d1 7c dd 08 06
	[ +18.836046] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 98 db 78 4f 0b 08 06
	[  +0.000708] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 c2 9a 56 52 0c 08 06
	
	
	==> etcd [56627175c47b813bf4460ca13a901ec3152cbb4d22f0362e40133db8b19b3e87] <==
	{"level":"warn","ts":"2025-11-15T10:36:00.768874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.775332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.781195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.789438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.847231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.853478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.859797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.865997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.872416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.885803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.897068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.903095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.909779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.915788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.951289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.959021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.965832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.973884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.980978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.987712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.994606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.043851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.051590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.058822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.066719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53790","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:36:56 up  2:19,  0 user,  load average: 3.29, 4.12, 2.78
	Linux embed-certs-719574 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7fc2fdf9c30a8cc273f623029958f218943ba5c78f1f3342ad2488e439a98294] <==
	I1115 10:36:02.747841       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:36:02.748130       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1115 10:36:02.748319       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:36:02.748336       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:36:02.748358       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:36:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:36:03.049045       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:36:03.049101       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:36:03.049116       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:36:03.049306       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 10:36:33.046876       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1115 10:36:33.052824       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 10:36:33.053169       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 10:36:33.146203       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1115 10:36:34.049575       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:36:34.049606       1 metrics.go:72] Registering metrics
	I1115 10:36:34.050037       1 controller.go:711] "Syncing nftables rules"
	I1115 10:36:42.955126       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1115 10:36:42.955231       1 main.go:301] handling current node
	I1115 10:36:52.961055       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1115 10:36:52.961095       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d2523d5b7384ac5c1a3b025b86aa00148daa68b776dfada0c3e9b0dd751d444c] <==
	I1115 10:36:01.845928       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 10:36:01.846469       1 aggregator.go:171] initial CRD sync complete...
	I1115 10:36:01.846490       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 10:36:01.846497       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 10:36:01.846503       1 cache.go:39] Caches are synced for autoregister controller
	I1115 10:36:01.846666       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 10:36:01.846675       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 10:36:01.846729       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 10:36:01.846796       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1115 10:36:01.848182       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:36:01.850996       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 10:36:01.851090       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1115 10:36:01.869848       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 10:36:01.971096       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:36:02.752896       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:36:03.267868       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 10:36:03.370523       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:36:03.449568       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:36:03.457880       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:36:03.563667       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.213.75"}
	I1115 10:36:03.575921       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.109.246"}
	I1115 10:36:05.329987       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:36:05.578417       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:36:05.628441       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [34a183c86eaa1af613ae4708887513840319366cbb4179e7f1678698297eade2] <==
	I1115 10:36:05.174299       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 10:36:05.174362       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 10:36:05.174508       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 10:36:05.175235       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 10:36:05.175357       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1115 10:36:05.175514       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:36:05.175585       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 10:36:05.175589       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:36:05.175706       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:36:05.175714       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 10:36:05.176303       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1115 10:36:05.178542       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 10:36:05.178688       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 10:36:05.179846       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1115 10:36:05.179940       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1115 10:36:05.180094       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1115 10:36:05.180113       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1115 10:36:05.180182       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1115 10:36:05.180351       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 10:36:05.182193       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:36:05.183398       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 10:36:05.185029       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 10:36:05.187113       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 10:36:05.194912       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 10:36:05.203199       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [f676bcf138c32c1e2f79a1401bcec6579bb0e86468d1bbfa5fa8782637358ec9] <==
	I1115 10:36:02.664777       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:36:02.846298       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:36:02.948784       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:36:02.948900       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1115 10:36:02.949155       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:36:03.052015       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:36:03.052180       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:36:03.061225       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:36:03.061793       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:36:03.061829       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:36:03.063698       1 config.go:200] "Starting service config controller"
	I1115 10:36:03.063769       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:36:03.063816       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:36:03.063823       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:36:03.063838       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:36:03.063843       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:36:03.064333       1 config.go:309] "Starting node config controller"
	I1115 10:36:03.064369       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:36:03.164560       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:36:03.164617       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:36:03.164648       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 10:36:03.164891       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [a04037d02e2f108f2bc0bc86b223e37c201372e3009a0b55923609e9c3f5b7b5] <==
	I1115 10:35:59.564973       1 serving.go:386] Generated self-signed cert in-memory
	W1115 10:36:01.752886       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1115 10:36:01.753212       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1115 10:36:01.753378       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1115 10:36:01.754418       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1115 10:36:01.865585       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 10:36:01.865769       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:36:01.877894       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:36:01.878035       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:36:01.880089       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:36:01.880216       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 10:36:01.978192       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:36:05 embed-certs-719574 kubelet[844]: I1115 10:36:05.891486     844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4033c9a9-052a-4725-a759-cefe2f0c9a8a-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-vknb8\" (UID: \"4033c9a9-052a-4725-a759-cefe2f0c9a8a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vknb8"
	Nov 15 10:36:05 embed-certs-719574 kubelet[844]: I1115 10:36:05.891502     844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pqrc\" (UniqueName: \"kubernetes.io/projected/4033c9a9-052a-4725-a759-cefe2f0c9a8a-kube-api-access-2pqrc\") pod \"dashboard-metrics-scraper-6ffb444bf9-vknb8\" (UID: \"4033c9a9-052a-4725-a759-cefe2f0c9a8a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vknb8"
	Nov 15 10:36:06 embed-certs-719574 kubelet[844]: W1115 10:36:06.125495     844 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/77b854d73395a6882ad846c6f51d5c84a63e9af66dc48bfe4bdf582432e5fa0b/crio-ec80aacc8d40afae47d87261d086b26caf7baaa7028295fb5e42a8808fff30d5 WatchSource:0}: Error finding container ec80aacc8d40afae47d87261d086b26caf7baaa7028295fb5e42a8808fff30d5: Status 404 returned error can't find the container with id ec80aacc8d40afae47d87261d086b26caf7baaa7028295fb5e42a8808fff30d5
	Nov 15 10:36:06 embed-certs-719574 kubelet[844]: W1115 10:36:06.126030     844 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/77b854d73395a6882ad846c6f51d5c84a63e9af66dc48bfe4bdf582432e5fa0b/crio-a5da08dc35ea4e5faf219762cb09085970940a36e653ddd181b6f7a2dbda2fcf WatchSource:0}: Error finding container a5da08dc35ea4e5faf219762cb09085970940a36e653ddd181b6f7a2dbda2fcf: Status 404 returned error can't find the container with id a5da08dc35ea4e5faf219762cb09085970940a36e653ddd181b6f7a2dbda2fcf
	Nov 15 10:36:06 embed-certs-719574 kubelet[844]: I1115 10:36:06.975624     844 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 15 10:36:10 embed-certs-719574 kubelet[844]: I1115 10:36:10.065579     844 scope.go:117] "RemoveContainer" containerID="9b028c55602adb5be59715c394432d21750e221d9449cfb4f669756ceda768e3"
	Nov 15 10:36:11 embed-certs-719574 kubelet[844]: I1115 10:36:11.070437     844 scope.go:117] "RemoveContainer" containerID="9b028c55602adb5be59715c394432d21750e221d9449cfb4f669756ceda768e3"
	Nov 15 10:36:11 embed-certs-719574 kubelet[844]: I1115 10:36:11.070614     844 scope.go:117] "RemoveContainer" containerID="249a33fbd4befd2570092d53aa7d4841660505a2c65352a0c18a1c74ee3be3c9"
	Nov 15 10:36:11 embed-certs-719574 kubelet[844]: E1115 10:36:11.070805     844 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vknb8_kubernetes-dashboard(4033c9a9-052a-4725-a759-cefe2f0c9a8a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vknb8" podUID="4033c9a9-052a-4725-a759-cefe2f0c9a8a"
	Nov 15 10:36:12 embed-certs-719574 kubelet[844]: I1115 10:36:12.075500     844 scope.go:117] "RemoveContainer" containerID="249a33fbd4befd2570092d53aa7d4841660505a2c65352a0c18a1c74ee3be3c9"
	Nov 15 10:36:12 embed-certs-719574 kubelet[844]: E1115 10:36:12.075709     844 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vknb8_kubernetes-dashboard(4033c9a9-052a-4725-a759-cefe2f0c9a8a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vknb8" podUID="4033c9a9-052a-4725-a759-cefe2f0c9a8a"
	Nov 15 10:36:16 embed-certs-719574 kubelet[844]: I1115 10:36:16.098503     844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tj9l5" podStartSLOduration=1.731529531 podStartE2EDuration="11.09847942s" podCreationTimestamp="2025-11-15 10:36:05 +0000 UTC" firstStartedPulling="2025-11-15 10:36:06.128641375 +0000 UTC m=+8.394806958" lastFinishedPulling="2025-11-15 10:36:15.495591281 +0000 UTC m=+17.761756847" observedRunningTime="2025-11-15 10:36:16.098202322 +0000 UTC m=+18.364367907" watchObservedRunningTime="2025-11-15 10:36:16.09847942 +0000 UTC m=+18.364645006"
	Nov 15 10:36:18 embed-certs-719574 kubelet[844]: I1115 10:36:18.290617     844 scope.go:117] "RemoveContainer" containerID="249a33fbd4befd2570092d53aa7d4841660505a2c65352a0c18a1c74ee3be3c9"
	Nov 15 10:36:18 embed-certs-719574 kubelet[844]: E1115 10:36:18.290800     844 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vknb8_kubernetes-dashboard(4033c9a9-052a-4725-a759-cefe2f0c9a8a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vknb8" podUID="4033c9a9-052a-4725-a759-cefe2f0c9a8a"
	Nov 15 10:36:29 embed-certs-719574 kubelet[844]: I1115 10:36:29.882323     844 scope.go:117] "RemoveContainer" containerID="249a33fbd4befd2570092d53aa7d4841660505a2c65352a0c18a1c74ee3be3c9"
	Nov 15 10:36:30 embed-certs-719574 kubelet[844]: I1115 10:36:30.121924     844 scope.go:117] "RemoveContainer" containerID="249a33fbd4befd2570092d53aa7d4841660505a2c65352a0c18a1c74ee3be3c9"
	Nov 15 10:36:30 embed-certs-719574 kubelet[844]: I1115 10:36:30.122156     844 scope.go:117] "RemoveContainer" containerID="47d1f78e1495806bd1e6bc6db543ba9f35ac22c92ea194e3742c7b61fc3a2da3"
	Nov 15 10:36:30 embed-certs-719574 kubelet[844]: E1115 10:36:30.122374     844 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vknb8_kubernetes-dashboard(4033c9a9-052a-4725-a759-cefe2f0c9a8a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vknb8" podUID="4033c9a9-052a-4725-a759-cefe2f0c9a8a"
	Nov 15 10:36:33 embed-certs-719574 kubelet[844]: I1115 10:36:33.132401     844 scope.go:117] "RemoveContainer" containerID="10b6f8a418fda8c0a43012d2666abd1f951f96d5c6a2315faf72d61ccd752d2f"
	Nov 15 10:36:38 embed-certs-719574 kubelet[844]: I1115 10:36:38.290508     844 scope.go:117] "RemoveContainer" containerID="47d1f78e1495806bd1e6bc6db543ba9f35ac22c92ea194e3742c7b61fc3a2da3"
	Nov 15 10:36:38 embed-certs-719574 kubelet[844]: E1115 10:36:38.290680     844 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vknb8_kubernetes-dashboard(4033c9a9-052a-4725-a759-cefe2f0c9a8a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vknb8" podUID="4033c9a9-052a-4725-a759-cefe2f0c9a8a"
	Nov 15 10:36:50 embed-certs-719574 kubelet[844]: I1115 10:36:50.882416     844 scope.go:117] "RemoveContainer" containerID="47d1f78e1495806bd1e6bc6db543ba9f35ac22c92ea194e3742c7b61fc3a2da3"
	Nov 15 10:36:50 embed-certs-719574 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:36:50 embed-certs-719574 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:36:50 embed-certs-719574 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [6587299f75a35df796ecc6b64e4a4ce75d90dc27bf9c4ef271dc17d17c347b48] <==
	2025/11/15 10:36:15 Using namespace: kubernetes-dashboard
	2025/11/15 10:36:15 Using in-cluster config to connect to apiserver
	2025/11/15 10:36:15 Using secret token for csrf signing
	2025/11/15 10:36:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 10:36:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 10:36:15 Successful initial request to the apiserver, version: v1.34.1
	2025/11/15 10:36:15 Generating JWE encryption key
	2025/11/15 10:36:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 10:36:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 10:36:15 Initializing JWE encryption key from synchronized object
	2025/11/15 10:36:15 Creating in-cluster Sidecar client
	2025/11/15 10:36:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:36:15 Serving insecurely on HTTP port: 9090
	2025/11/15 10:36:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:36:15 Starting overwatch
	
	
	==> storage-provisioner [10b6f8a418fda8c0a43012d2666abd1f951f96d5c6a2315faf72d61ccd752d2f] <==
	I1115 10:36:02.575597       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 10:36:32.646645       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fb08cb8a4d59b7ee225bd83ec883701f5430ec14c7bf4ecd1bbfd4dc422ad397] <==
	I1115 10:36:33.182285       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 10:36:33.189535       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:36:33.189575       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 10:36:33.192034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:36.647348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:40.907268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:44.505976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:47.561004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:50.583280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:50.600940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:36:50.601152       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:36:50.601269       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"40f0e3ae-7c7f-492f-ba67-375413ad6bff", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-719574_6b94c6a6-9f0d-484a-88f9-f71427654633 became leader
	I1115 10:36:50.601357       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-719574_6b94c6a6-9f0d-484a-88f9-f71427654633!
	W1115 10:36:50.615294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:50.618735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:36:50.701784       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-719574_6b94c6a6-9f0d-484a-88f9-f71427654633!
	W1115 10:36:52.622063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:52.626567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:54.630286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:54.634040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:56.637428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:56.642517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-719574 -n embed-certs-719574
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-719574 -n embed-certs-719574: exit status 2 (330.624217ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-719574 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.61s)
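
Note on the CrashLoopBackOff entries in the kubelet log above: the dashboard-metrics-scraper container is held in an increasing restart back-off, and the two windows visible here (10s, then 20s) are the start of kubelet's usual doubling progression. A rough sketch of that progression, assuming the long-standing defaults of a 10s initial delay doubling up to a 5m cap (illustrative only, not kubelet's code):

    package main

    import (
    	"fmt"
    	"time"
    )

    // backoff returns an illustrative CrashLoopBackOff delay for the n-th
    // failed restart: 10s initial delay, doubled per failure, capped at 5m.
    // The 10s and 20s values in the kubelet log above are the first two steps.
    func backoff(failures int) time.Duration {
    	d := 10 * time.Second
    	for i := 1; i < failures; i++ {
    		d *= 2
    		if d > 5*time.Minute {
    			d = 5 * time.Minute
    			break
    		}
    	}
    	return d
    }

    func main() {
    	for n := 1; n <= 7; n++ {
    		fmt.Printf("restart attempt %d: back-off %v\n", n, backoff(n))
    	}
    }

Running it prints 10s, 20s, 40s, 1m20s, 2m40s, then stays at the 5m cap, matching the 10s/20s windows logged above.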

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (5.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-026691 --alsologtostderr -v=1
E1115 10:37:35.974643   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/calico-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-026691 --alsologtostderr -v=1: exit status 80 (1.486960532s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-026691 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:37:35.844373  396851 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:37:35.844538  396851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:37:35.844549  396851 out.go:374] Setting ErrFile to fd 2...
	I1115 10:37:35.844556  396851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:37:35.844763  396851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 10:37:35.845051  396851 out.go:368] Setting JSON to false
	I1115 10:37:35.845091  396851 mustload.go:66] Loading cluster: default-k8s-diff-port-026691
	I1115 10:37:35.845481  396851 config.go:182] Loaded profile config "default-k8s-diff-port-026691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:37:35.846315  396851 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:37:35.864606  396851 host.go:66] Checking if "default-k8s-diff-port-026691" exists ...
	I1115 10:37:35.864883  396851 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:37:35.921127  396851 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:55 SystemTime:2025-11-15 10:37:35.910767308 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:37:35.921709  396851 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-026691 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1115 10:37:35.923394  396851 out.go:179] * Pausing node default-k8s-diff-port-026691 ... 
	I1115 10:37:35.924457  396851 host.go:66] Checking if "default-k8s-diff-port-026691" exists ...
	I1115 10:37:35.924741  396851 ssh_runner.go:195] Run: systemctl --version
	I1115 10:37:35.924791  396851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:37:35.942357  396851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:37:36.034350  396851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:37:36.062397  396851 pause.go:52] kubelet running: true
	I1115 10:37:36.062467  396851 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:37:36.200814  396851 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:37:36.200902  396851 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:37:36.266334  396851 cri.go:89] found id: "c80de6cf1abc0ce1c57707adc801c943013ebf29b303b1b301153401cfb9e7f4"
	I1115 10:37:36.266365  396851 cri.go:89] found id: "c835d06b811afe7277524798594204743e6b5c98eb025ff53b5a2bbdf7a96794"
	I1115 10:37:36.266371  396851 cri.go:89] found id: "13394bc9d7c6680728c8c7f5b7c939c8bf8ddf701e93585d0d249b4debb8779d"
	I1115 10:37:36.266377  396851 cri.go:89] found id: "7066ef5abc4bc0c6c62f762a419ff0ace9bbf240ada62fbf94eea91e68213566"
	I1115 10:37:36.266381  396851 cri.go:89] found id: "de29b76605c3aef2cd62d1da1ab7845a60a8a7dbe6ba39ecfdbf9ae60a3a31d8"
	I1115 10:37:36.266386  396851 cri.go:89] found id: "b04411b3a023360282fda35601339f2000829b38e465e5c4c130b7c58e111bb3"
	I1115 10:37:36.266390  396851 cri.go:89] found id: "971b8e4c2073b59c0dd66594f19cfc1d0b9f81cb1bad24b21946d5ea7b012768"
	I1115 10:37:36.266394  396851 cri.go:89] found id: "6a1db649ea51d34c5661db65312d1c8660828376983f865ce0b3d2801b219c2d"
	I1115 10:37:36.266397  396851 cri.go:89] found id: "58595dd2cf4ce1cb8f740a6594f3ee7a3c7d2587682b4e6c526266c0067303a7"
	I1115 10:37:36.266405  396851 cri.go:89] found id: "8168cf11cc97ab90349686f8d781936b53a8680baa790ca9124eb4fee99df98d"
	I1115 10:37:36.266408  396851 cri.go:89] found id: "b7dca0b8853e890d3a900bd3933f60ce1727d329f35d122cf14fd332ab681fb0"
	I1115 10:37:36.266410  396851 cri.go:89] found id: ""
	I1115 10:37:36.266451  396851 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:37:36.278324  396851 retry.go:31] will retry after 268.723792ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:37:36Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:37:36.547919  396851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:37:36.561306  396851 pause.go:52] kubelet running: false
	I1115 10:37:36.561378  396851 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:37:36.683704  396851 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:37:36.683791  396851 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:37:36.748597  396851 cri.go:89] found id: "c80de6cf1abc0ce1c57707adc801c943013ebf29b303b1b301153401cfb9e7f4"
	I1115 10:37:36.748618  396851 cri.go:89] found id: "c835d06b811afe7277524798594204743e6b5c98eb025ff53b5a2bbdf7a96794"
	I1115 10:37:36.748622  396851 cri.go:89] found id: "13394bc9d7c6680728c8c7f5b7c939c8bf8ddf701e93585d0d249b4debb8779d"
	I1115 10:37:36.748625  396851 cri.go:89] found id: "7066ef5abc4bc0c6c62f762a419ff0ace9bbf240ada62fbf94eea91e68213566"
	I1115 10:37:36.748628  396851 cri.go:89] found id: "de29b76605c3aef2cd62d1da1ab7845a60a8a7dbe6ba39ecfdbf9ae60a3a31d8"
	I1115 10:37:36.748632  396851 cri.go:89] found id: "b04411b3a023360282fda35601339f2000829b38e465e5c4c130b7c58e111bb3"
	I1115 10:37:36.748634  396851 cri.go:89] found id: "971b8e4c2073b59c0dd66594f19cfc1d0b9f81cb1bad24b21946d5ea7b012768"
	I1115 10:37:36.748637  396851 cri.go:89] found id: "6a1db649ea51d34c5661db65312d1c8660828376983f865ce0b3d2801b219c2d"
	I1115 10:37:36.748640  396851 cri.go:89] found id: "58595dd2cf4ce1cb8f740a6594f3ee7a3c7d2587682b4e6c526266c0067303a7"
	I1115 10:37:36.748648  396851 cri.go:89] found id: "8168cf11cc97ab90349686f8d781936b53a8680baa790ca9124eb4fee99df98d"
	I1115 10:37:36.748651  396851 cri.go:89] found id: "b7dca0b8853e890d3a900bd3933f60ce1727d329f35d122cf14fd332ab681fb0"
	I1115 10:37:36.748653  396851 cri.go:89] found id: ""
	I1115 10:37:36.748692  396851 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:37:36.760253  396851 retry.go:31] will retry after 287.543286ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:37:36Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:37:37.048760  396851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:37:37.061437  396851 pause.go:52] kubelet running: false
	I1115 10:37:37.061508  396851 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:37:37.183583  396851 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:37:37.183678  396851 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:37:37.250139  396851 cri.go:89] found id: "c80de6cf1abc0ce1c57707adc801c943013ebf29b303b1b301153401cfb9e7f4"
	I1115 10:37:37.250161  396851 cri.go:89] found id: "c835d06b811afe7277524798594204743e6b5c98eb025ff53b5a2bbdf7a96794"
	I1115 10:37:37.250166  396851 cri.go:89] found id: "13394bc9d7c6680728c8c7f5b7c939c8bf8ddf701e93585d0d249b4debb8779d"
	I1115 10:37:37.250169  396851 cri.go:89] found id: "7066ef5abc4bc0c6c62f762a419ff0ace9bbf240ada62fbf94eea91e68213566"
	I1115 10:37:37.250171  396851 cri.go:89] found id: "de29b76605c3aef2cd62d1da1ab7845a60a8a7dbe6ba39ecfdbf9ae60a3a31d8"
	I1115 10:37:37.250174  396851 cri.go:89] found id: "b04411b3a023360282fda35601339f2000829b38e465e5c4c130b7c58e111bb3"
	I1115 10:37:37.250176  396851 cri.go:89] found id: "971b8e4c2073b59c0dd66594f19cfc1d0b9f81cb1bad24b21946d5ea7b012768"
	I1115 10:37:37.250179  396851 cri.go:89] found id: "6a1db649ea51d34c5661db65312d1c8660828376983f865ce0b3d2801b219c2d"
	I1115 10:37:37.250181  396851 cri.go:89] found id: "58595dd2cf4ce1cb8f740a6594f3ee7a3c7d2587682b4e6c526266c0067303a7"
	I1115 10:37:37.250188  396851 cri.go:89] found id: "8168cf11cc97ab90349686f8d781936b53a8680baa790ca9124eb4fee99df98d"
	I1115 10:37:37.250202  396851 cri.go:89] found id: "b7dca0b8853e890d3a900bd3933f60ce1727d329f35d122cf14fd332ab681fb0"
	I1115 10:37:37.250206  396851 cri.go:89] found id: ""
	I1115 10:37:37.250258  396851 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:37:37.263597  396851 out.go:203] 
	W1115 10:37:37.264683  396851 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:37:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:37:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:37:37.264700  396851 out.go:285] * 
	* 
	W1115 10:37:37.269894  396851 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:37:37.271016  396851 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-026691 --alsologtostderr -v=1 failed: exit status 80
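
For context on the failure above: the stderr trace shows the pause path probing the node over SSH before pausing anything. It checks whether kubelet is active, disables it, lists CRI containers in the kube-system, kubernetes-dashboard and istio-operator namespaces via crictl, and finally runs `sudo runc list -f json`; that last step is what fails with `open /run/runc: no such file or directory` and produces the GUEST_PAUSE exit (status 80). Below is a minimal sketch that replays the same probes against the profile by shelling out through `minikube ssh` (illustrative only; the real logic lives in minikube's pause.go and uses its own ssh_runner):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    const profile = "default-k8s-diff-port-026691" // profile name from the failing test

    // nodeRun runs a command on the minikube node via `minikube ssh` and
    // returns its combined output; it stands in for minikube's ssh_runner.
    func nodeRun(cmd string) (string, error) {
    	out, err := exec.Command("out/minikube-linux-amd64", "ssh", "-p", profile, cmd).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	// 1. Same check as "pause.go:52] kubelet running: ..." in the trace.
    	if _, err := nodeRun("sudo systemctl is-active --quiet service kubelet"); err == nil {
    		fmt.Println("kubelet running: true")
    	} else {
    		fmt.Println("kubelet running: false")
    	}

    	// 2. Same container listing the trace shows for the kube-system namespace.
    	ids, _ := nodeRun("sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system")
    	fmt.Printf("kube-system container IDs:\n%s", ids)

    	// 3. The step that actually fails in this run.
    	out, err := nodeRun("sudo runc list -f json")
    	if err != nil {
    		fmt.Printf("runc list failed (GUEST_PAUSE cause): %v\n%s\n", err, out)
    	}
    }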
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-026691
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-026691:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "acb25a518a850a99b04a9ecc3ee276eeb45082aa05801ad29c50dc8761212798",
	        "Created": "2025-11-15T10:34:56.785604479Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 388621,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:36:31.424936822Z",
	            "FinishedAt": "2025-11-15T10:36:30.544090667Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/acb25a518a850a99b04a9ecc3ee276eeb45082aa05801ad29c50dc8761212798/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/acb25a518a850a99b04a9ecc3ee276eeb45082aa05801ad29c50dc8761212798/hostname",
	        "HostsPath": "/var/lib/docker/containers/acb25a518a850a99b04a9ecc3ee276eeb45082aa05801ad29c50dc8761212798/hosts",
	        "LogPath": "/var/lib/docker/containers/acb25a518a850a99b04a9ecc3ee276eeb45082aa05801ad29c50dc8761212798/acb25a518a850a99b04a9ecc3ee276eeb45082aa05801ad29c50dc8761212798-json.log",
	        "Name": "/default-k8s-diff-port-026691",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-026691:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-026691",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "acb25a518a850a99b04a9ecc3ee276eeb45082aa05801ad29c50dc8761212798",
	                "LowerDir": "/var/lib/docker/overlay2/330082c0af87d9272d4fc35061c9dcf149c94a075cbe50833b0301c28218115b-init/diff:/var/lib/docker/overlay2/507c85b6fb43d9c98216fac79d7a8d08dd20b2a63d0fbfc758336e5d38c04044/diff",
	                "MergedDir": "/var/lib/docker/overlay2/330082c0af87d9272d4fc35061c9dcf149c94a075cbe50833b0301c28218115b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/330082c0af87d9272d4fc35061c9dcf149c94a075cbe50833b0301c28218115b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/330082c0af87d9272d4fc35061c9dcf149c94a075cbe50833b0301c28218115b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-026691",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-026691/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-026691",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-026691",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-026691",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "0ee50bddc8180ffef82c9f1d4c30dce0a82dd04cbdf3ae2c6ad4b2dd0e9c09ca",
	            "SandboxKey": "/var/run/docker/netns/0ee50bddc818",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-026691": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a057ad05bea093d4f46407b93bd0d97f5f0b4004a2f1151b31de55e2e2a06fb7",
	                    "EndpointID": "95ae4a6178ed12ab94e3095f2e9d937b033e8f61777d2b6ba2953a3a6a79f9ec",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "92:be:c7:10:04:57",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-026691",
	                        "acb25a518a85"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
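
The `NetworkSettings.Ports` block in the inspect output above is the same data the earlier `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` call resolves (port 33134 for SSH in this run). A small sketch, assuming only the docker CLI on PATH, that reads the same binding by decoding the inspect JSON instead of a Go template:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // inspectEntry models only the fields of `docker inspect` output used here.
    type inspectEntry struct {
    	NetworkSettings struct {
    		Ports map[string][]struct {
    			HostIp   string
    			HostPort string
    		}
    	}
    }

    func main() {
    	name := "default-k8s-diff-port-026691" // container name from the report above

    	raw, err := exec.Command("docker", "inspect", name).Output()
    	if err != nil {
    		panic(err)
    	}

    	var entries []inspectEntry
    	if err := json.Unmarshal(raw, &entries); err != nil || len(entries) == 0 {
    		panic(fmt.Sprintf("unexpected inspect output: %v", err))
    	}

    	// In this run the 22/tcp binding is 127.0.0.1:33134.
    	if b := entries[0].NetworkSettings.Ports["22/tcp"]; len(b) > 0 {
    		fmt.Printf("ssh endpoint: %s:%s\n", b[0].HostIp, b[0].HostPort)
    	}
    }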
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-026691 -n default-k8s-diff-port-026691
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-026691 -n default-k8s-diff-port-026691: exit status 2 (314.955366ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-026691 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-026691 logs -n 25: (1.066328386s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p embed-certs-719574 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p old-k8s-version-087235                                                                                                                                                                                                                     │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p newest-cni-086099 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:36 UTC │
	│ image   │ no-preload-283677 image list --format=json                                                                                                                                                                                                    │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ pause   │ -p no-preload-283677 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ delete  │ -p no-preload-283677                                                                                                                                                                                                                          │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p no-preload-283677                                                                                                                                                                                                                          │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-026691 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-026691 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p newest-cni-086099 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ stop    │ -p newest-cni-086099 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-086099 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ start   │ -p newest-cni-086099 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-026691 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ start   │ -p default-k8s-diff-port-026691 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:37 UTC │
	│ image   │ newest-cni-086099 image list --format=json                                                                                                                                                                                                    │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ pause   │ -p newest-cni-086099 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ delete  │ -p newest-cni-086099                                                                                                                                                                                                                          │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p newest-cni-086099                                                                                                                                                                                                                          │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ image   │ embed-certs-719574 image list --format=json                                                                                                                                                                                                   │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ pause   │ -p embed-certs-719574 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ delete  │ -p embed-certs-719574                                                                                                                                                                                                                         │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p embed-certs-719574                                                                                                                                                                                                                         │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:37 UTC │
	│ image   │ default-k8s-diff-port-026691 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ pause   │ -p default-k8s-diff-port-026691 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:36:31
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:36:31.193182  388420 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:36:31.193281  388420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:36:31.193289  388420 out.go:374] Setting ErrFile to fd 2...
	I1115 10:36:31.193293  388420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:36:31.193515  388420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 10:36:31.193933  388420 out.go:368] Setting JSON to false
	I1115 10:36:31.195111  388420 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8328,"bootTime":1763194663,"procs":266,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:36:31.195216  388420 start.go:143] virtualization: kvm guest
	I1115 10:36:31.196894  388420 out.go:179] * [default-k8s-diff-port-026691] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:36:31.198076  388420 notify.go:221] Checking for updates...
	I1115 10:36:31.198087  388420 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 10:36:31.199249  388420 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:36:31.200471  388420 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:36:31.201512  388420 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube
	I1115 10:36:31.202449  388420 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:36:31.203634  388420 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:36:31.205205  388420 config.go:182] Loaded profile config "default-k8s-diff-port-026691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:31.205718  388420 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:36:31.228892  388420 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:36:31.229044  388420 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:36:31.285898  388420 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:67 SystemTime:2025-11-15 10:36:31.276283811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:36:31.286032  388420 docker.go:319] overlay module found
	I1115 10:36:31.287655  388420 out.go:179] * Using the docker driver based on existing profile
	I1115 10:36:31.288859  388420 start.go:309] selected driver: docker
	I1115 10:36:31.288877  388420 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-026691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:31.288972  388420 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:36:31.289812  388420 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:36:31.352009  388420 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:66 SystemTime:2025-11-15 10:36:31.342104199 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:36:31.352371  388420 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:36:31.352408  388420 cni.go:84] Creating CNI manager for ""
	I1115 10:36:31.352457  388420 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:36:31.352498  388420 start.go:353] cluster config:
	{Name:default-k8s-diff-port-026691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:31.354418  388420 out.go:179] * Starting "default-k8s-diff-port-026691" primary control-plane node in "default-k8s-diff-port-026691" cluster
	I1115 10:36:31.355595  388420 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:36:31.356825  388420 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:36:31.357856  388420 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:36:31.357890  388420 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 10:36:31.357905  388420 cache.go:65] Caching tarball of preloaded images
	I1115 10:36:31.357944  388420 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:36:31.358020  388420 preload.go:238] Found /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 10:36:31.358036  388420 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:36:31.358136  388420 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/config.json ...
	I1115 10:36:31.378843  388420 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:36:31.378864  388420 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:36:31.378881  388420 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:36:31.378904  388420 start.go:360] acquireMachinesLock for default-k8s-diff-port-026691: {Name:mk1f3196dd9a24a043fa707553211d0b0ea8c1f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:36:31.378986  388420 start.go:364] duration metric: took 61.257µs to acquireMachinesLock for "default-k8s-diff-port-026691"
	I1115 10:36:31.379010  388420 start.go:96] Skipping create...Using existing machine configuration
	I1115 10:36:31.379018  388420 fix.go:54] fixHost starting: 
	I1115 10:36:31.379252  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:31.397025  388420 fix.go:112] recreateIfNeeded on default-k8s-diff-port-026691: state=Stopped err=<nil>
	W1115 10:36:31.397068  388420 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 10:36:29.135135  387591 out.go:252] * Restarting existing docker container for "newest-cni-086099" ...
	I1115 10:36:29.135222  387591 cli_runner.go:164] Run: docker start newest-cni-086099
	I1115 10:36:29.412428  387591 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:29.431258  387591 kic.go:430] container "newest-cni-086099" state is running.
	I1115 10:36:29.431760  387591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-086099
	I1115 10:36:29.450271  387591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/config.json ...
	I1115 10:36:29.450487  387591 machine.go:94] provisionDockerMachine start ...
	I1115 10:36:29.450542  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:29.468796  387591 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:29.469141  387591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1115 10:36:29.469158  387591 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:36:29.469768  387591 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43374->127.0.0.1:33129: read: connection reset by peer
	I1115 10:36:32.597021  387591 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-086099
	
	I1115 10:36:32.597063  387591 ubuntu.go:182] provisioning hostname "newest-cni-086099"
	I1115 10:36:32.597140  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:32.616934  387591 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:32.617209  387591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1115 10:36:32.617233  387591 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-086099 && echo "newest-cni-086099" | sudo tee /etc/hostname
	I1115 10:36:32.756237  387591 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-086099
	
	I1115 10:36:32.756329  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:32.775168  387591 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:32.775389  387591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1115 10:36:32.775405  387591 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-086099' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-086099/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-086099' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:36:32.902668  387591 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:36:32.902701  387591 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-55448/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-55448/.minikube}
	I1115 10:36:32.902736  387591 ubuntu.go:190] setting up certificates
	I1115 10:36:32.902754  387591 provision.go:84] configureAuth start
	I1115 10:36:32.902811  387591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-086099
	I1115 10:36:32.921923  387591 provision.go:143] copyHostCerts
	I1115 10:36:32.922017  387591 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem, removing ...
	I1115 10:36:32.922035  387591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem
	I1115 10:36:32.922102  387591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem (1082 bytes)
	I1115 10:36:32.922216  387591 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem, removing ...
	I1115 10:36:32.922225  387591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem
	I1115 10:36:32.922253  387591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem (1123 bytes)
	I1115 10:36:32.922341  387591 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem, removing ...
	I1115 10:36:32.922348  387591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem
	I1115 10:36:32.922372  387591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem (1679 bytes)
	I1115 10:36:32.922421  387591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem org=jenkins.newest-cni-086099 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-086099]
	I1115 10:36:32.940854  387591 provision.go:177] copyRemoteCerts
	I1115 10:36:32.940914  387591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:36:32.940948  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:32.958931  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:33.053731  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:36:33.071243  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:36:33.088651  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 10:36:33.105219  387591 provision.go:87] duration metric: took 202.453369ms to configureAuth
	I1115 10:36:33.105244  387591 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:36:33.105414  387591 config.go:182] Loaded profile config "newest-cni-086099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:33.105509  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:33.123012  387591 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:33.123259  387591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1115 10:36:33.123277  387591 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:36:33.389799  387591 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:36:33.389822  387591 machine.go:97] duration metric: took 3.93932207s to provisionDockerMachine
	I1115 10:36:33.389835  387591 start.go:293] postStartSetup for "newest-cni-086099" (driver="docker")
	I1115 10:36:33.389844  387591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:36:33.389903  387591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:36:33.389946  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:33.409403  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:33.503330  387591 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:36:33.506790  387591 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:36:33.506815  387591 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:36:33.506825  387591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/addons for local assets ...
	I1115 10:36:33.506878  387591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/files for local assets ...
	I1115 10:36:33.506995  387591 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem -> 589622.pem in /etc/ssl/certs
	I1115 10:36:33.507126  387591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:36:33.514570  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:33.531880  387591 start.go:296] duration metric: took 142.028023ms for postStartSetup
	I1115 10:36:33.532012  387591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:36:33.532066  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:33.549908  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:33.640348  387591 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:36:33.645124  387591 fix.go:56] duration metric: took 4.529931109s for fixHost
	I1115 10:36:33.645164  387591 start.go:83] releasing machines lock for "newest-cni-086099", held for 4.529982501s
	I1115 10:36:33.645246  387591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-086099
	I1115 10:36:33.663364  387591 ssh_runner.go:195] Run: cat /version.json
	I1115 10:36:33.663400  387591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:36:33.663445  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:33.663461  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:33.682200  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:33.682521  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:33.827221  387591 ssh_runner.go:195] Run: systemctl --version
	I1115 10:36:33.834019  387591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:36:33.868151  387591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:36:33.872995  387591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:36:33.873067  387591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:36:33.881540  387591 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:36:33.881563  387591 start.go:496] detecting cgroup driver to use...
	I1115 10:36:33.881595  387591 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:36:33.881628  387591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:36:33.895704  387591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:36:33.907633  387591 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:36:33.907681  387591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:36:33.921408  387591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	W1115 10:36:30.745845  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	W1115 10:36:32.746544  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	I1115 10:36:33.933689  387591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:36:34.015025  387591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:36:34.097166  387591 docker.go:234] disabling docker service ...
	I1115 10:36:34.097250  387591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:36:34.111501  387591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:36:34.123898  387591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:36:34.208076  387591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:36:34.289077  387591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:36:34.302010  387591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:36:34.316333  387591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:36:34.316409  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.325113  387591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:36:34.325175  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.333844  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.342343  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.350817  387591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:36:34.359269  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.368008  387591 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.376100  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.384822  387591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:36:34.392091  387591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:36:34.399149  387591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:34.478616  387591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:36:34.580323  387591 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:36:34.580408  387591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:36:34.584509  387591 start.go:564] Will wait 60s for crictl version
	I1115 10:36:34.584568  387591 ssh_runner.go:195] Run: which crictl
	I1115 10:36:34.588078  387591 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:36:34.613070  387591 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:36:34.613150  387591 ssh_runner.go:195] Run: crio --version
	I1115 10:36:34.641080  387591 ssh_runner.go:195] Run: crio --version
	I1115 10:36:34.670335  387591 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:36:34.671690  387591 cli_runner.go:164] Run: docker network inspect newest-cni-086099 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:36:34.689678  387591 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1115 10:36:34.693973  387591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:34.705342  387591 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1115 10:36:31.398937  388420 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-026691" ...
	I1115 10:36:31.399016  388420 cli_runner.go:164] Run: docker start default-k8s-diff-port-026691
	I1115 10:36:31.676189  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:31.694382  388420 kic.go:430] container "default-k8s-diff-port-026691" state is running.
	I1115 10:36:31.694751  388420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-026691
	I1115 10:36:31.713425  388420 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/config.json ...
	I1115 10:36:31.713652  388420 machine.go:94] provisionDockerMachine start ...
	I1115 10:36:31.713746  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:31.732991  388420 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:31.733252  388420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1115 10:36:31.733277  388420 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:36:31.734038  388420 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45950->127.0.0.1:33134: read: connection reset by peer
	I1115 10:36:34.867843  388420 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-026691
	
	I1115 10:36:34.867883  388420 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-026691"
	I1115 10:36:34.868072  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:34.887800  388420 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:34.888079  388420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1115 10:36:34.888098  388420 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-026691 && echo "default-k8s-diff-port-026691" | sudo tee /etc/hostname
	I1115 10:36:35.027312  388420 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-026691
	
	I1115 10:36:35.027402  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:35.049307  388420 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:35.049620  388420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1115 10:36:35.049653  388420 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-026691' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-026691/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-026691' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:36:35.185792  388420 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:36:35.185824  388420 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-55448/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-55448/.minikube}
	I1115 10:36:35.185877  388420 ubuntu.go:190] setting up certificates
	I1115 10:36:35.185889  388420 provision.go:84] configureAuth start
	I1115 10:36:35.185975  388420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-026691
	I1115 10:36:35.205215  388420 provision.go:143] copyHostCerts
	I1115 10:36:35.205302  388420 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem, removing ...
	I1115 10:36:35.205325  388420 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem
	I1115 10:36:35.205419  388420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem (1082 bytes)
	I1115 10:36:35.205578  388420 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem, removing ...
	I1115 10:36:35.205600  388420 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem
	I1115 10:36:35.205648  388420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem (1123 bytes)
	I1115 10:36:35.205811  388420 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem, removing ...
	I1115 10:36:35.205831  388420 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem
	I1115 10:36:35.205877  388420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem (1679 bytes)
	I1115 10:36:35.205988  388420 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-026691 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-026691 localhost minikube]
	I1115 10:36:35.356382  388420 provision.go:177] copyRemoteCerts
	I1115 10:36:35.356441  388420 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:36:35.356476  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:35.375752  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:35.470476  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:36:35.488150  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1115 10:36:35.505264  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:36:35.522854  388420 provision.go:87] duration metric: took 336.947608ms to configureAuth
	I1115 10:36:35.522880  388420 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:36:35.523120  388420 config.go:182] Loaded profile config "default-k8s-diff-port-026691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:35.523282  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:35.543167  388420 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:35.543480  388420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1115 10:36:35.543509  388420 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:36:35.848476  388420 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:36:35.848509  388420 machine.go:97] duration metric: took 4.134839636s to provisionDockerMachine
	I1115 10:36:35.848525  388420 start.go:293] postStartSetup for "default-k8s-diff-port-026691" (driver="docker")
	I1115 10:36:35.848541  388420 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:36:35.848616  388420 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:36:35.848671  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:35.868537  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:35.963605  388420 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:36:35.967175  388420 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:36:35.967199  388420 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:36:35.967209  388420 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/addons for local assets ...
	I1115 10:36:35.967263  388420 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/files for local assets ...
	I1115 10:36:35.967339  388420 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem -> 589622.pem in /etc/ssl/certs
	I1115 10:36:35.967422  388420 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:36:35.975404  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:35.992754  388420 start.go:296] duration metric: took 144.211835ms for postStartSetup
	I1115 10:36:35.992851  388420 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:36:35.992902  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:36.010853  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:36.106652  388420 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:36:36.111301  388420 fix.go:56] duration metric: took 4.732276816s for fixHost
	I1115 10:36:36.111327  388420 start.go:83] releasing machines lock for "default-k8s-diff-port-026691", held for 4.732326241s
	I1115 10:36:36.111401  388420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-026691
	I1115 10:36:36.133087  388420 ssh_runner.go:195] Run: cat /version.json
	I1115 10:36:36.133147  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:36.133224  388420 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:36:36.133295  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:36.161597  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:36.162169  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:34.706341  387591 kubeadm.go:884] updating cluster {Name:newest-cni-086099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:36:34.706463  387591 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:36:34.706520  387591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:34.737832  387591 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:34.737871  387591 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:36:34.737929  387591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:34.765628  387591 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:34.765650  387591 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:36:34.765657  387591 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1115 10:36:34.765750  387591 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-086099 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:36:34.765813  387591 ssh_runner.go:195] Run: crio config
	I1115 10:36:34.812764  387591 cni.go:84] Creating CNI manager for ""
	I1115 10:36:34.812787  387591 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:36:34.812806  387591 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1115 10:36:34.812836  387591 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-086099 NodeName:newest-cni-086099 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:36:34.813018  387591 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-086099"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:36:34.813097  387591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:36:34.821514  387591 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:36:34.821582  387591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:36:34.829425  387591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1115 10:36:34.841803  387591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:36:34.854099  387591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1115 10:36:34.867123  387591 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:36:34.871300  387591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:34.882157  387591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:34.965624  387591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:36:34.991396  387591 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099 for IP: 192.168.103.2
	I1115 10:36:34.991421  387591 certs.go:195] generating shared ca certs ...
	I1115 10:36:34.991442  387591 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:34.991611  387591 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 10:36:34.991670  387591 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 10:36:34.991685  387591 certs.go:257] generating profile certs ...
	I1115 10:36:34.991800  387591 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/client.key
	I1115 10:36:34.991881  387591 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key.d719cdad
	I1115 10:36:34.991938  387591 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.key
	I1115 10:36:34.992114  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:36:34.992160  387591 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:36:34.992182  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:36:34.992223  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:36:34.992266  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:36:34.992298  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:36:34.992360  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:34.993060  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:36:35.012346  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:36:35.032525  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:36:35.052616  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:36:35.116969  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 10:36:35.141400  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:36:35.160318  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:36:35.178367  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:36:35.231343  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:36:35.251073  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:36:35.269574  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:36:35.287839  387591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:36:35.300609  387591 ssh_runner.go:195] Run: openssl version
	I1115 10:36:35.306757  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:36:35.315111  387591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:36:35.318673  387591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:36:35.318726  387591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:36:35.352595  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:36:35.360661  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:36:35.369044  387591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:36:35.373102  387591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:36:35.373149  387591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:36:35.407763  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:36:35.416805  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:36:35.426105  387591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:35.429879  387591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:35.429928  387591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:35.464376  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:36:35.472689  387591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:36:35.476537  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:36:35.513422  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:36:35.552107  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:36:35.627892  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:36:35.738207  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:36:35.927631  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1115 10:36:36.020791  387591 kubeadm.go:401] StartCluster: {Name:newest-cni-086099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:36.020915  387591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:36:36.020993  387591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:36:36.054712  387591 cri.go:89] found id: "38ec6363bcab12e98490c10ae199bee3e96e76c3b252eb09eaacbceb79f4fee3"
	I1115 10:36:36.054741  387591 cri.go:89] found id: "dcddb7cd9963badc91f7602efc0c7dd3a6aa66928df5288774cb992a0f211b2c"
	I1115 10:36:36.054748  387591 cri.go:89] found id: "938d8a7a407d113893b3543524a1f018292ab3c06ad38c37347f5c09b4f19aed"
	I1115 10:36:36.054753  387591 cri.go:89] found id: "6799daac297c17fbe94a59a4b23a00cc23bdfe7671f3f31f803b074be4ef25a5"
	I1115 10:36:36.054758  387591 cri.go:89] found id: ""
	I1115 10:36:36.054810  387591 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:36:36.122342  387591 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:36Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:36:36.122434  387591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:36:36.132788  387591 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:36:36.132807  387591 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:36:36.132853  387591 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:36:36.144175  387591 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:36:36.145209  387591 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-086099" does not appear in /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:36:36.145870  387591 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-55448/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-086099" cluster setting kubeconfig missing "newest-cni-086099" context setting]
	I1115 10:36:36.146847  387591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:36.149871  387591 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:36:36.217177  387591 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1115 10:36:36.217217  387591 kubeadm.go:602] duration metric: took 84.40299ms to restartPrimaryControlPlane
	I1115 10:36:36.217231  387591 kubeadm.go:403] duration metric: took 196.454161ms to StartCluster
	I1115 10:36:36.217253  387591 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:36.217343  387591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:36:36.218632  387591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:36.218872  387591 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:36:36.218972  387591 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:36:36.219074  387591 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-086099"
	I1115 10:36:36.219094  387591 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-086099"
	W1115 10:36:36.219105  387591 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:36:36.219138  387591 host.go:66] Checking if "newest-cni-086099" exists ...
	I1115 10:36:36.219158  387591 addons.go:70] Setting dashboard=true in profile "newest-cni-086099"
	I1115 10:36:36.219163  387591 config.go:182] Loaded profile config "newest-cni-086099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:36.219193  387591 addons.go:239] Setting addon dashboard=true in "newest-cni-086099"
	W1115 10:36:36.219202  387591 addons.go:248] addon dashboard should already be in state true
	I1115 10:36:36.219217  387591 addons.go:70] Setting default-storageclass=true in profile "newest-cni-086099"
	I1115 10:36:36.219235  387591 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-086099"
	I1115 10:36:36.219248  387591 host.go:66] Checking if "newest-cni-086099" exists ...
	I1115 10:36:36.219557  387591 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:36.219712  387591 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:36.219712  387591 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:36.220680  387591 out.go:179] * Verifying Kubernetes components...
	I1115 10:36:36.221665  387591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:36.248161  387591 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:36:36.248172  387591 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 10:36:36.249608  387591 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:36:36.249628  387591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:36:36.249683  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:36.249733  387591 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 10:36:36.324481  388420 ssh_runner.go:195] Run: systemctl --version
	I1115 10:36:36.336623  388420 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:36:36.372576  388420 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:36:36.377572  388420 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:36:36.377633  388420 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:36:36.385687  388420 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:36:36.385710  388420 start.go:496] detecting cgroup driver to use...
	I1115 10:36:36.385740  388420 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:36:36.385776  388420 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:36:36.399728  388420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:36:36.411622  388420 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:36:36.411694  388420 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:36:36.431786  388420 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:36:36.449270  388420 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:36:36.538378  388420 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:36:36.622459  388420 docker.go:234] disabling docker service ...
	I1115 10:36:36.622563  388420 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:36:36.644022  388420 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:36:36.656349  388420 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:36:36.757453  388420 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:36:36.851752  388420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:36:36.864024  388420 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:36:36.878189  388420 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:36:36.878243  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.886869  388420 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:36:36.886944  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.895649  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.904129  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.912660  388420 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:36:36.922601  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.934730  388420 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.945527  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.955227  388420 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:36:36.962702  388420 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
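
The sed edits above touch CRI-O's drop-in config rather than the main crio.conf. Assuming the stock kicbase layout, /etc/crio/crio.conf.d/02-crio.conf ends up roughly as sketched below; the section headers are the usual CRI-O TOML tables, and only the keys shown in the log are asserted:

    # /etc/crio/crio.conf.d/02-crio.conf (approximate result of the edits above)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
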
	I1115 10:36:36.969927  388420 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:37.064102  388420 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:36:37.181392  388420 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:36:37.181469  388420 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:36:37.185705  388420 start.go:564] Will wait 60s for crictl version
	I1115 10:36:37.185759  388420 ssh_runner.go:195] Run: which crictl
	I1115 10:36:37.189374  388420 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:36:37.214797  388420 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:36:37.214872  388420 ssh_runner.go:195] Run: crio --version
	I1115 10:36:37.247024  388420 ssh_runner.go:195] Run: crio --version
	I1115 10:36:37.283127  388420 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1115 10:36:35.246243  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	I1115 10:36:37.246256  377744 pod_ready.go:94] pod "coredns-66bc5c9577-fjzk5" is "Ready"
	I1115 10:36:37.246283  377744 pod_ready.go:86] duration metric: took 33.505674032s for pod "coredns-66bc5c9577-fjzk5" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.248931  377744 pod_ready.go:83] waiting for pod "etcd-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.253449  377744 pod_ready.go:94] pod "etcd-embed-certs-719574" is "Ready"
	I1115 10:36:37.253477  377744 pod_ready.go:86] duration metric: took 4.523106ms for pod "etcd-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.258749  377744 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.262996  377744 pod_ready.go:94] pod "kube-apiserver-embed-certs-719574" is "Ready"
	I1115 10:36:37.263019  377744 pod_ready.go:86] duration metric: took 4.2473ms for pod "kube-apiserver-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.265400  377744 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.444138  377744 pod_ready.go:94] pod "kube-controller-manager-embed-certs-719574" is "Ready"
	I1115 10:36:37.444168  377744 pod_ready.go:86] duration metric: took 178.743562ms for pod "kube-controller-manager-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.644722  377744 pod_ready.go:83] waiting for pod "kube-proxy-kmc8c" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:38.044247  377744 pod_ready.go:94] pod "kube-proxy-kmc8c" is "Ready"
	I1115 10:36:38.044277  377744 pod_ready.go:86] duration metric: took 399.527336ms for pod "kube-proxy-kmc8c" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:38.245350  377744 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:38.644894  377744 pod_ready.go:94] pod "kube-scheduler-embed-certs-719574" is "Ready"
	I1115 10:36:38.645014  377744 pod_ready.go:86] duration metric: took 399.62796ms for pod "kube-scheduler-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:38.645030  377744 pod_ready.go:40] duration metric: took 34.90782271s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:36:38.702511  377744 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:36:38.706562  377744 out.go:179] * Done! kubectl is now configured to use "embed-certs-719574" cluster and "default" namespace by default
	I1115 10:36:37.284492  388420 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-026691 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
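
The --format template above flattens docker network inspect into one JSON-like line that minikube parses for the subnet and gateway. Run by hand with a trimmed version of the same template it would print something like the sketch below (the gateway matches the host.minikube.internal entry added next; the /24 subnet is illustrative):

    docker network inspect default-k8s-diff-port-026691 \
      --format '{"Name": "{{.Name}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}"}'
    # -> {"Name": "default-k8s-diff-port-026691","Subnet": "192.168.85.0/24","Gateway": "192.168.85.1"}
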
	I1115 10:36:37.302095  388420 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1115 10:36:37.306321  388420 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:37.316768  388420 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-026691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:36:37.316911  388420 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:36:37.316980  388420 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:37.354039  388420 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:37.354063  388420 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:36:37.354121  388420 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:37.384223  388420 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:37.384249  388420 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:36:37.384257  388420 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1115 10:36:37.384353  388420 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-026691 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:36:37.384416  388420 ssh_runner.go:195] Run: crio config
	I1115 10:36:37.429588  388420 cni.go:84] Creating CNI manager for ""
	I1115 10:36:37.429616  388420 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:36:37.429637  388420 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:36:37.429663  388420 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-026691 NodeName:default-k8s-diff-port-026691 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:36:37.429840  388420 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-026691"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:36:37.429922  388420 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:36:37.438488  388420 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:36:37.438583  388420 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:36:37.446984  388420 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1115 10:36:37.459608  388420 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:36:37.472652  388420 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
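
The rendered kubeadm config is shipped to /var/tmp/minikube/kubeadm.yaml.new; on this restart path it is only diffed against the existing kubeadm.yaml (the diff -u further down exits cleanly, hence "does not require reconfiguration"). A sketch of the two ways such a file is consumed, the second being the fresh-cluster path this log does not take:

    # Restart path: reconfigure only if the rendered config differs from what is on disk
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      && echo "no reconfiguration needed"

    # Fresh-cluster path (illustrative, not taken here): hand the same file to kubeadm
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new
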
	I1115 10:36:37.484924  388420 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:36:37.488541  388420 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:37.498126  388420 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:37.587175  388420 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:36:37.609456  388420 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691 for IP: 192.168.85.2
	I1115 10:36:37.609480  388420 certs.go:195] generating shared ca certs ...
	I1115 10:36:37.609501  388420 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:37.609671  388420 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 10:36:37.609735  388420 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 10:36:37.609750  388420 certs.go:257] generating profile certs ...
	I1115 10:36:37.609859  388420 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/client.key
	I1115 10:36:37.609921  388420 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.key.f8824eec
	I1115 10:36:37.610007  388420 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.key
	I1115 10:36:37.610146  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:36:37.610198  388420 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:36:37.610212  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:36:37.610244  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:36:37.610278  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:36:37.610306  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:36:37.610359  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:37.611122  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:36:37.629925  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:36:37.650833  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:36:37.671862  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:36:37.696427  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1115 10:36:37.763348  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:36:37.782654  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:36:37.800720  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:36:37.817628  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:36:37.835327  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:36:37.856769  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:36:37.876039  388420 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:36:37.891255  388420 ssh_runner.go:195] Run: openssl version
	I1115 10:36:37.898994  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:36:37.907571  388420 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:36:37.912280  388420 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:36:37.912337  388420 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:36:37.950692  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:36:37.959456  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:36:37.968450  388420 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:36:37.972465  388420 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:36:37.972521  388420 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:36:38.008129  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:36:38.016745  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:36:38.027414  388420 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:38.031718  388420 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:38.031792  388420 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:38.077405  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:36:38.086004  388420 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:36:38.089990  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:36:38.127939  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:36:38.181791  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:36:38.256153  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:36:38.368577  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:36:38.543333  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1115 10:36:38.645754  388420 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-026691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:38.645863  388420 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:36:38.645935  388420 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:36:38.685210  388420 cri.go:89] found id: "b04411b3a023360282fda35601339f2000829b38e465e5c4c130b7c58e111bb3"
	I1115 10:36:38.685237  388420 cri.go:89] found id: "971b8e4c2073b59c0dd66594f19cfc1d0b9f81cb1bad24b21946d5ea7b012768"
	I1115 10:36:38.685254  388420 cri.go:89] found id: "6a1db649ea51d34c5661db65312d1c8660828376983f865ce0b3d2801b219c2d"
	I1115 10:36:38.685259  388420 cri.go:89] found id: "58595dd2cf4ce1cb8f740a6594f3ee7a3c7d2587682b4e6c526266c0067303a7"
	I1115 10:36:38.685262  388420 cri.go:89] found id: ""
	I1115 10:36:38.685312  388420 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:36:38.750674  388420 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:38Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:36:38.750744  388420 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:36:38.769157  388420 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:36:38.769186  388420 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:36:38.769238  388420 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:36:38.842499  388420 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:36:38.845337  388420 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-026691" does not appear in /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:36:38.846840  388420 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-55448/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-026691" cluster setting kubeconfig missing "default-k8s-diff-port-026691" context setting]
	I1115 10:36:38.849516  388420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:38.855210  388420 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:36:38.870026  388420 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1115 10:36:38.870059  388420 kubeadm.go:602] duration metric: took 100.86647ms to restartPrimaryControlPlane
	I1115 10:36:38.870073  388420 kubeadm.go:403] duration metric: took 224.328768ms to StartCluster
	I1115 10:36:38.870094  388420 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:38.870172  388420 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:36:38.872536  388420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:38.872812  388420 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:36:38.873059  388420 config.go:182] Loaded profile config "default-k8s-diff-port-026691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:38.873024  388420 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:36:38.873181  388420 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-026691"
	I1115 10:36:38.873220  388420 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-026691"
	W1115 10:36:38.873240  388420 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:36:38.873315  388420 host.go:66] Checking if "default-k8s-diff-port-026691" exists ...
	I1115 10:36:38.873258  388420 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-026691"
	I1115 10:36:38.873640  388420 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-026691"
	W1115 10:36:38.873663  388420 addons.go:248] addon dashboard should already be in state true
	I1115 10:36:38.873444  388420 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-026691"
	I1115 10:36:38.873728  388420 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-026691"
	I1115 10:36:38.873753  388420 host.go:66] Checking if "default-k8s-diff-port-026691" exists ...
	I1115 10:36:38.874091  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:38.874589  388420 out.go:179] * Verifying Kubernetes components...
	I1115 10:36:38.874818  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:38.875168  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:38.876706  388420 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:38.907308  388420 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:36:38.907363  388420 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-026691"
	W1115 10:36:38.907464  388420 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:36:38.907503  388420 host.go:66] Checking if "default-k8s-diff-port-026691" exists ...
	I1115 10:36:38.908043  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:38.912208  388420 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:36:38.912236  388420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:36:38.912295  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:38.915346  388420 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 10:36:38.916793  388420 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 10:36:36.250323  387591 addons.go:239] Setting addon default-storageclass=true in "newest-cni-086099"
	W1115 10:36:36.250350  387591 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:36:36.250389  387591 host.go:66] Checking if "newest-cni-086099" exists ...
	I1115 10:36:36.251476  387591 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:36.255103  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 10:36:36.255128  387591 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 10:36:36.255190  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:36.278537  387591 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:36:36.278565  387591 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:36:36.278644  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:36.280814  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:36.281721  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:36.296440  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:36.630526  387591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:36:36.633566  387591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:36:36.636633  387591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:36:36.638099  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 10:36:36.638116  387591 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 10:36:36.724472  387591 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:36:36.724559  387591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:36:36.729948  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 10:36:36.730015  387591 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 10:36:36.826253  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 10:36:36.826282  387591 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 10:36:36.843537  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 10:36:36.843560  387591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 10:36:36.931895  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 10:36:36.931924  387591 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 10:36:36.945766  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 10:36:36.945791  387591 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 10:36:37.023562  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 10:36:37.023593  387591 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 10:36:37.038918  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 10:36:37.038944  387591 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 10:36:37.052909  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:36:37.052937  387591 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 10:36:37.119950  387591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:36:39.816288  387591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.182684264s)
	I1115 10:36:40.959315  387591 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.234727667s)
	I1115 10:36:40.959363  387591 api_server.go:72] duration metric: took 4.740464162s to wait for apiserver process to appear ...
	I1115 10:36:40.959371  387591 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:36:40.959395  387591 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1115 10:36:40.959325  387591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.322653976s)
	I1115 10:36:40.959440  387591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.839423734s)
	I1115 10:36:40.962518  387591 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-086099 addons enable metrics-server
	
	I1115 10:36:40.964092  387591 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1115 10:36:38.917819  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 10:36:38.917851  388420 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 10:36:38.917924  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:38.930932  388420 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:36:38.930982  388420 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:36:38.931053  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:38.933702  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:38.939670  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:38.960258  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
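
The Port:33134 in the sshutil lines above is the host port Docker published for the container's SSH port 22; the inspect template quoted earlier extracts exactly that mapping. Roughly:

    # Which host port is mapped to the container's 22/tcp?
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      default-k8s-diff-port-026691
    # -> 33134, so minikube opens an SSH client to 127.0.0.1:33134 with the profile's id_rsa key
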
	I1115 10:36:39.257807  388420 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:36:39.264707  388420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:36:39.270235  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 10:36:39.270261  388420 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 10:36:39.274532  388420 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-026691" to be "Ready" ...
	I1115 10:36:39.351682  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 10:36:39.351725  388420 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 10:36:39.357989  388420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:36:39.374984  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 10:36:39.375011  388420 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 10:36:39.457352  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 10:36:39.457377  388420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 10:36:39.542591  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 10:36:39.542618  388420 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 10:36:39.565925  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 10:36:39.566041  388420 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 10:36:39.580123  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 10:36:39.580242  388420 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 10:36:39.655102  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 10:36:39.655149  388420 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 10:36:39.669218  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:36:39.669246  388420 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 10:36:39.683183  388420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:36:40.965416  387591 addons.go:515] duration metric: took 4.746465999s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1115 10:36:40.965454  387591 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:36:40.965477  387591 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:36:41.460167  387591 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1115 10:36:41.465475  387591 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1115 10:36:41.466642  387591 api_server.go:141] control plane version: v1.34.1
	I1115 10:36:41.466668  387591 api_server.go:131] duration metric: took 507.289044ms to wait for apiserver health ...
	I1115 10:36:41.466679  387591 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:36:41.470116  387591 system_pods.go:59] 8 kube-system pods found
	I1115 10:36:41.470165  387591 system_pods.go:61] "coredns-66bc5c9577-rblh2" [903029e0-3b15-43f3-836a-884de528cbc9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 10:36:41.470180  387591 system_pods.go:61] "etcd-newest-cni-086099" [6768a007-08a6-47b0-9917-cf54f577829b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:36:41.470190  387591 system_pods.go:61] "kindnet-2h7mm" [1b25f4e6-5f26-42ce-8ceb-56003682c785] Running
	I1115 10:36:41.470200  387591 system_pods.go:61] "kube-apiserver-newest-cni-086099" [3ca22829-f679-44bf-94e5-e4a368e13dcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:36:41.470210  387591 system_pods.go:61] "kube-controller-manager-newest-cni-086099" [1f45f32a-2d9e-49c0-9c69-d2aa59324564] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:36:41.470219  387591 system_pods.go:61] "kube-proxy-6jpzt" [7409c19f-472b-4074-81d0-8e43ac2bc9d4] Running
	I1115 10:36:41.470226  387591 system_pods.go:61] "kube-scheduler-newest-cni-086099" [c3510e0f-9b51-4fb5-bc6e-d0e47be8f5ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:36:41.470235  387591 system_pods.go:61] "storage-provisioner" [23166a3f-bb02-48ca-ab00-721c8c46525d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 10:36:41.470247  387591 system_pods.go:74] duration metric: took 3.560608ms to wait for pod list to return data ...
	I1115 10:36:41.470262  387591 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:36:41.472726  387591 default_sa.go:45] found service account: "default"
	I1115 10:36:41.472751  387591 default_sa.go:55] duration metric: took 2.478273ms for default service account to be created ...
	I1115 10:36:41.472765  387591 kubeadm.go:587] duration metric: took 5.253867745s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1115 10:36:41.472786  387591 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:36:41.475250  387591 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:36:41.475273  387591 node_conditions.go:123] node cpu capacity is 8
	I1115 10:36:41.475284  387591 node_conditions.go:105] duration metric: took 2.490696ms to run NodePressure ...
	I1115 10:36:41.475297  387591 start.go:242] waiting for startup goroutines ...
	I1115 10:36:41.475306  387591 start.go:247] waiting for cluster config update ...
	I1115 10:36:41.475322  387591 start.go:256] writing updated cluster config ...
	I1115 10:36:41.475622  387591 ssh_runner.go:195] Run: rm -f paused
	I1115 10:36:41.529383  387591 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:36:41.531753  387591 out.go:179] * Done! kubectl is now configured to use "newest-cni-086099" cluster and "default" namespace by default
	I1115 10:36:42.149798  388420 node_ready.go:49] node "default-k8s-diff-port-026691" is "Ready"
	I1115 10:36:42.149832  388420 node_ready.go:38] duration metric: took 2.87526393s for node "default-k8s-diff-port-026691" to be "Ready" ...
	I1115 10:36:42.149851  388420 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:36:42.149915  388420 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:36:43.654191  388420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.38943226s)
	I1115 10:36:43.654229  388420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.29621492s)
	I1115 10:36:43.654402  388420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.971169317s)
	I1115 10:36:43.654437  388420 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.50449925s)
	I1115 10:36:43.654474  388420 api_server.go:72] duration metric: took 4.78163246s to wait for apiserver process to appear ...
	I1115 10:36:43.654482  388420 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:36:43.654504  388420 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1115 10:36:43.655988  388420 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-026691 addons enable metrics-server
	
	I1115 10:36:43.659469  388420 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:36:43.659501  388420 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:36:43.660788  388420 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1115 10:36:43.661827  388420 addons.go:515] duration metric: took 4.788813528s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1115 10:36:44.155099  388420 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1115 10:36:44.160271  388420 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1115 10:36:44.161286  388420 api_server.go:141] control plane version: v1.34.1
	I1115 10:36:44.161316  388420 api_server.go:131] duration metric: took 506.825578ms to wait for apiserver health ...
	I1115 10:36:44.161327  388420 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:36:44.164559  388420 system_pods.go:59] 8 kube-system pods found
	I1115 10:36:44.164606  388420 system_pods.go:61] "coredns-66bc5c9577-5q2j4" [e6c4ca54-e0fe-45ee-88a6-33bdccbb876c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:44.164622  388420 system_pods.go:61] "etcd-default-k8s-diff-port-026691" [f8228efa-91a3-41fb-9952-3b4063ddf162] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:36:44.164631  388420 system_pods.go:61] "kindnet-hjdrk" [9e1f7579-f5a2-44cd-b77f-71219cd8827d] Running
	I1115 10:36:44.164645  388420 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-026691" [9da86b3b-bfcd-4c40-ac86-ee9d9faeb9b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:36:44.164658  388420 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-026691" [23d9be87-1de1-4c38-b65e-b084cf0fed25] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:36:44.164667  388420 system_pods.go:61] "kube-proxy-c5bw5" [ee48d34b-ae60-4a03-a7bd-df76e089eebb] Running
	I1115 10:36:44.164677  388420 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-026691" [52b3711f-015c-4af6-8672-58642cbc0c16] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:36:44.164686  388420 system_pods.go:61] "storage-provisioner" [7dedf7a9-415d-4260-b225-7ca171744768] Running
	I1115 10:36:44.164696  388420 system_pods.go:74] duration metric: took 3.356326ms to wait for pod list to return data ...
	I1115 10:36:44.164709  388420 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:36:44.166570  388420 default_sa.go:45] found service account: "default"
	I1115 10:36:44.166593  388420 default_sa.go:55] duration metric: took 1.872347ms for default service account to be created ...
	I1115 10:36:44.166603  388420 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:36:44.169425  388420 system_pods.go:86] 8 kube-system pods found
	I1115 10:36:44.169450  388420 system_pods.go:89] "coredns-66bc5c9577-5q2j4" [e6c4ca54-e0fe-45ee-88a6-33bdccbb876c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:44.169459  388420 system_pods.go:89] "etcd-default-k8s-diff-port-026691" [f8228efa-91a3-41fb-9952-3b4063ddf162] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:36:44.169467  388420 system_pods.go:89] "kindnet-hjdrk" [9e1f7579-f5a2-44cd-b77f-71219cd8827d] Running
	I1115 10:36:44.169472  388420 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-026691" [9da86b3b-bfcd-4c40-ac86-ee9d9faeb9b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:36:44.169482  388420 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-026691" [23d9be87-1de1-4c38-b65e-b084cf0fed25] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:36:44.169497  388420 system_pods.go:89] "kube-proxy-c5bw5" [ee48d34b-ae60-4a03-a7bd-df76e089eebb] Running
	I1115 10:36:44.169512  388420 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-026691" [52b3711f-015c-4af6-8672-58642cbc0c16] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:36:44.169521  388420 system_pods.go:89] "storage-provisioner" [7dedf7a9-415d-4260-b225-7ca171744768] Running
	I1115 10:36:44.169532  388420 system_pods.go:126] duration metric: took 2.922555ms to wait for k8s-apps to be running ...
	I1115 10:36:44.169541  388420 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:36:44.169593  388420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:36:44.183310  388420 system_svc.go:56] duration metric: took 13.759187ms WaitForService to wait for kubelet
	I1115 10:36:44.183342  388420 kubeadm.go:587] duration metric: took 5.310501278s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:36:44.183366  388420 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:36:44.186800  388420 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:36:44.186826  388420 node_conditions.go:123] node cpu capacity is 8
	I1115 10:36:44.186843  388420 node_conditions.go:105] duration metric: took 3.463462ms to run NodePressure ...
	I1115 10:36:44.186859  388420 start.go:242] waiting for startup goroutines ...
	I1115 10:36:44.186872  388420 start.go:247] waiting for cluster config update ...
	I1115 10:36:44.186896  388420 start.go:256] writing updated cluster config ...
	I1115 10:36:44.187247  388420 ssh_runner.go:195] Run: rm -f paused
	I1115 10:36:44.191349  388420 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:36:44.194864  388420 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5q2j4" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 10:36:46.200419  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:36:48.202278  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:36:50.700646  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:36:53.200685  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:36:55.201458  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:36:57.700358  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:36:59.700839  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:37:02.202553  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:37:04.700511  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:37:07.200845  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:37:09.701174  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:37:12.201848  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:37:14.700490  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:37:17.200204  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:37:19.200721  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:37:21.700622  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	I1115 10:37:22.700922  388420 pod_ready.go:94] pod "coredns-66bc5c9577-5q2j4" is "Ready"
	I1115 10:37:22.700972  388420 pod_ready.go:86] duration metric: took 38.506067751s for pod "coredns-66bc5c9577-5q2j4" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:22.703455  388420 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:22.707198  388420 pod_ready.go:94] pod "etcd-default-k8s-diff-port-026691" is "Ready"
	I1115 10:37:22.707224  388420 pod_ready.go:86] duration metric: took 3.746841ms for pod "etcd-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:22.709149  388420 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:22.712859  388420 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-026691" is "Ready"
	I1115 10:37:22.712878  388420 pod_ready.go:86] duration metric: took 3.701511ms for pod "kube-apiserver-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:22.714646  388420 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:22.898389  388420 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-026691" is "Ready"
	I1115 10:37:22.898421  388420 pod_ready.go:86] duration metric: took 183.755678ms for pod "kube-controller-manager-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:23.100053  388420 pod_ready.go:83] waiting for pod "kube-proxy-c5bw5" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:23.498461  388420 pod_ready.go:94] pod "kube-proxy-c5bw5" is "Ready"
	I1115 10:37:23.498490  388420 pod_ready.go:86] duration metric: took 398.410887ms for pod "kube-proxy-c5bw5" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:23.700082  388420 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:24.099377  388420 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-026691" is "Ready"
	I1115 10:37:24.099410  388420 pod_ready.go:86] duration metric: took 399.303233ms for pod "kube-scheduler-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:24.099423  388420 pod_ready.go:40] duration metric: took 39.908043344s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:37:24.145421  388420 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:37:24.147183  388420 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-026691" cluster and "default" namespace by default
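	
	The readiness loop above (healthz polling until 200, then per-pod "Ready" checks) can be reproduced by hand against the same profile. A minimal sketch, assuming kubectl is already pointed at the default-k8s-diff-port-026691 context:
	
		# verbose healthz: the same [+]/[-] check list logged above
		kubectl get --raw '/healthz?verbose'
		# wait for the coredns pods the test polls, with the same 4m budget
		kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m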
	
	
	==> CRI-O <==
	Nov 15 10:37:14 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:14.004609762Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:37:14 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:14.004748327Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/bc56cf4ad2338104279fae2ffa4cc6dfcf8114153d42cbd26bac9283ab91bddb/merged/etc/passwd: no such file or directory"
	Nov 15 10:37:14 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:14.004773277Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/bc56cf4ad2338104279fae2ffa4cc6dfcf8114153d42cbd26bac9283ab91bddb/merged/etc/group: no such file or directory"
	Nov 15 10:37:14 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:14.004997265Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:37:14 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:14.035013492Z" level=info msg="Created container c80de6cf1abc0ce1c57707adc801c943013ebf29b303b1b301153401cfb9e7f4: kube-system/storage-provisioner/storage-provisioner" id=88908268-a616-4324-867d-621aa395fb1b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:37:14 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:14.035707379Z" level=info msg="Starting container: c80de6cf1abc0ce1c57707adc801c943013ebf29b303b1b301153401cfb9e7f4" id=c3ce7c4f-4af8-46c3-9c60-27a6d2751901 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:37:14 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:14.037687015Z" level=info msg="Started container" PID=1822 containerID=c80de6cf1abc0ce1c57707adc801c943013ebf29b303b1b301153401cfb9e7f4 description=kube-system/storage-provisioner/storage-provisioner id=c3ce7c4f-4af8-46c3-9c60-27a6d2751901 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0879512bba06072ef6eb046a730037f955ebd87bd96974c51067724cc996fa4f
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.715599535Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.720028064Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.720052566Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.720070105Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.723655378Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.723686721Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.723715483Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.727330556Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.727352428Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.727368911Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.730949302Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.730987347Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.731010013Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.73444403Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.734470655Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.734489535Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.738108871Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.738128433Z" level=info msg="Updated default CNI network name to kindnet"
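	
	The CREATE/WRITE/RENAME events above are CRI-O reacting to kindnet rewriting its CNI config under /etc/cni/net.d. A minimal sketch for inspecting the same state on the node over minikube ssh (assumes CRI-O runs as the crio systemd unit in this image):
	
		minikube -p default-k8s-diff-port-026691 ssh -- sudo cat /etc/cni/net.d/10-kindnet.conflist
		# assumption: CRI-O is managed by systemd as the crio unit
		minikube -p default-k8s-diff-port-026691 ssh -- sudo journalctl -u crio --since '10 min ago'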
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	c80de6cf1abc0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago       Running             storage-provisioner         2                   0879512bba060       storage-provisioner                                    kube-system
	8168cf11cc97a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           30 seconds ago       Exited              dashboard-metrics-scraper   2                   1541e4c08bf7c       dashboard-metrics-scraper-6ffb444bf9-rtx7l             kubernetes-dashboard
	b7dca0b8853e8       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago       Running             kubernetes-dashboard        0                   21717aa5a6d69       kubernetes-dashboard-855c9754f9-lnfbf                  kubernetes-dashboard
	c835d06b811af       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           54 seconds ago       Running             kube-proxy                  1                   e19687bbe255d       kube-proxy-c5bw5                                       kube-system
	13394bc9d7c66       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago       Exited              storage-provisioner         1                   0879512bba060       storage-provisioner                                    kube-system
	7066ef5abc4bc       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago       Running             coredns                     1                   72f03cbe17792       coredns-66bc5c9577-5q2j4                               kube-system
	3ae4905c605ef       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago       Running             busybox                     1                   3106611b956e2       busybox                                                default
	de29b76605c3a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago       Running             kindnet-cni                 1                   38a2086414ec1       kindnet-hjdrk                                          kube-system
	b04411b3a0233       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           About a minute ago   Running             kube-scheduler              1                   69601d853e140       kube-scheduler-default-k8s-diff-port-026691            kube-system
	971b8e4c2073b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           About a minute ago   Running             kube-apiserver              1                   8542ee2316931       kube-apiserver-default-k8s-diff-port-026691            kube-system
	6a1db649ea51d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           About a minute ago   Running             kube-controller-manager     1                   e716f86b369cd       kube-controller-manager-default-k8s-diff-port-026691   kube-system
	58595dd2cf4ce       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        1                   0abfd59608c32       etcd-default-k8s-diff-port-026691                      kube-system
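	
	The table above resembles the CRI runtime's own container listing; crictl on the node reports the same IDs. A minimal sketch (the container ID prefix is taken from the storage-provisioner row above):
	
		minikube -p default-k8s-diff-port-026691 ssh -- sudo crictl ps -a
		minikube -p default-k8s-diff-port-026691 ssh -- sudo crictl logs c80de6cf1abc0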
	
	
	==> coredns [7066ef5abc4bc0c6c62f762a419ff0ace9bbf240ada62fbf94eea91e68213566] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57953 - 19015 "HINFO IN 7049713276823466735.5464609015533187721. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.016127809s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
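	
	The "dial tcp 10.96.0.1:443: i/o timeout" errors above are CoreDNS failing to reach the kubernetes Service VIP while the apiserver was still coming back. A minimal sketch for checking that path once the control plane is healthy (the busybox image tag is an assumption):
	
		# confirm the Service VIP still maps to the apiserver endpoint
		kubectl get endpoints kubernetes
		# probe cluster DNS from inside the pod network
		# assumption: busybox:1.28 is acceptable for a throwaway debug pod
		kubectl run -it --rm dnscheck --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default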
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-026691
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-026691
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=default-k8s-diff-port-026691
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_35_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:35:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-026691
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:37:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:37:23 +0000   Sat, 15 Nov 2025 10:35:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:37:23 +0000   Sat, 15 Nov 2025 10:35:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:37:23 +0000   Sat, 15 Nov 2025 10:35:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:37:23 +0000   Sat, 15 Nov 2025 10:36:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-026691
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                cb07002a-423d-4a10-9a8e-bf05fe259209
	  Boot ID:                    6a33f504-3a8c-4d49-8fc5-301cf5275efc
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-5q2j4                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m18s
	  kube-system                 etcd-default-k8s-diff-port-026691                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m23s
	  kube-system                 kindnet-hjdrk                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m18s
	  kube-system                 kube-apiserver-default-k8s-diff-port-026691             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-026691    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-c5bw5                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-default-k8s-diff-port-026691             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-rtx7l              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-lnfbf                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m17s                  kube-proxy       
	  Normal   Starting                 54s                    kube-proxy       
	  Normal   Starting                 2m30s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m30s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m29s (x8 over 2m30s)  kubelet          Node default-k8s-diff-port-026691 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m29s (x8 over 2m30s)  kubelet          Node default-k8s-diff-port-026691 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m29s (x8 over 2m30s)  kubelet          Node default-k8s-diff-port-026691 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m23s                  kubelet          Node default-k8s-diff-port-026691 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m23s                  kubelet          Node default-k8s-diff-port-026691 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m23s                  kubelet          Node default-k8s-diff-port-026691 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m23s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m19s                  node-controller  Node default-k8s-diff-port-026691 event: Registered Node default-k8s-diff-port-026691 in Controller
	  Normal   NodeReady                97s                    kubelet          Node default-k8s-diff-port-026691 status is now: NodeReady
	  Normal   Starting                 61s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-026691 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-026691 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-026691 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                    node-controller  Node default-k8s-diff-port-026691 event: Registered Node default-k8s-diff-port-026691 in Controller
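	
	A minimal sketch to regenerate the node dump above and to read back just the Ready condition that flipped during the restart:
	
		kubectl describe node default-k8s-diff-port-026691
		kubectl get node default-k8s-diff-port-026691 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'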
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 7f 53 f9 15 93 08 06
	[  +0.017821] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c6 66 25 e9 45 28 08 06
	[ +19.178294] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 a3 65 b9 28 15 08 06
	[  +0.347929] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 6d df 33 a5 ad 08 06
	[  +0.000319] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 7f 53 f9 15 93 08 06
	[Nov15 10:33] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 96 ed 10 e8 9a 80 08 06
	[  +0.000481] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 a3 65 b9 28 15 08 06
	[ +35.214217] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 c2 9a 56 52 0c 08 06
	[  +9.104720] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 c2 c4 d1 7c dd 08 06
	[Nov15 10:34] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 68 1e 80 53 05 08 06
	[  +0.000444] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 c2 c4 d1 7c dd 08 06
	[ +18.836046] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 98 db 78 4f 0b 08 06
	[  +0.000708] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 c2 9a 56 52 0c 08 06
	
	
	==> etcd [58595dd2cf4ce1cb8f740a6594f3ee7a3c7d2587682b4e6c526266c0067303a7] <==
	{"level":"warn","ts":"2025-11-15T10:36:40.682609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.747329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.757934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.766589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.773794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.781907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.791044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.845440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.855152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.864761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.871212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.879161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.885852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.945550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.954602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.963938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.973480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.981458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.987505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.993865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:41.048852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:41.066044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:41.074303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:41.080618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:41.186388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38196","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:37:38 up  2:19,  0 user,  load average: 1.77, 3.62, 2.67
	Linux default-k8s-diff-port-026691 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [de29b76605c3aef2cd62d1da1ab7845a60a8a7dbe6ba39ecfdbf9ae60a3a31d8] <==
	I1115 10:36:43.444727       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:36:43.445023       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 10:36:43.445247       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:36:43.445266       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:36:43.445292       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:36:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:36:43.714933       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:36:43.741851       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:36:43.742181       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 10:36:43.742212       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	E1115 10:37:13.716427       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1115 10:37:13.716492       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 10:37:13.742649       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 10:37:13.743767       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1115 10:37:15.043284       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:37:15.043323       1 metrics.go:72] Registering metrics
	I1115 10:37:15.043430       1 controller.go:711] "Syncing nftables rules"
	I1115 10:37:23.715281       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:37:23.715361       1 main.go:301] handling current node
	I1115 10:37:33.715509       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:37:33.715560       1 main.go:301] handling current node
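	
	kindnet above recovers its informers and then reports "Syncing nftables rules". A minimal sketch for inspecting the rules it programs on the node (assumes the nft binary is present in the node image):
	
		# assumption: nft is available inside the minikube node
		minikube -p default-k8s-diff-port-026691 ssh -- sudo nft list ruleset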
	
	
	==> kube-apiserver [971b8e4c2073b59c0dd66594f19cfc1d0b9f81cb1bad24b21946d5ea7b012768] <==
	I1115 10:36:42.246198       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 10:36:42.246205       1 cache.go:39] Caches are synced for autoregister controller
	I1115 10:36:42.246345       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 10:36:42.246382       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 10:36:42.246471       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1115 10:36:42.246492       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1115 10:36:42.246641       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 10:36:42.246719       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 10:36:42.246727       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 10:36:42.246728       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 10:36:42.254385       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 10:36:42.254517       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1115 10:36:42.259556       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 10:36:42.748090       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:36:42.895642       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 10:36:42.973937       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:36:43.045724       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:36:43.057747       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:36:43.071656       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:36:43.269659       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.34.16"}
	I1115 10:36:43.349124       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.149.205"}
	I1115 10:36:45.705040       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:36:46.004858       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:36:46.054439       1 controller.go:667] quota admission added evaluator for: endpoints
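	
	The "allocated clusterIPs" lines above correspond to the dashboard Services created by the addon. A minimal sketch to confirm them after the run:
	
		kubectl -n kubernetes-dashboard get svc kubernetes-dashboard dashboard-metrics-scraper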
	
	
	==> kube-controller-manager [6a1db649ea51d34c5661db65312d1c8660828376983f865ce0b3d2801b219c2d] <==
	I1115 10:36:45.451278       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 10:36:45.451472       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 10:36:45.451291       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 10:36:45.451542       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-026691"
	I1115 10:36:45.451333       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 10:36:45.451609       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1115 10:36:45.451343       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 10:36:45.451316       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1115 10:36:45.452992       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 10:36:45.453644       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 10:36:45.455153       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 10:36:45.455425       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:36:45.455488       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 10:36:45.455491       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 10:36:45.455633       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 10:36:45.455504       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 10:36:45.457014       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 10:36:45.457882       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 10:36:45.462117       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 10:36:45.466020       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1115 10:36:45.467216       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1115 10:36:45.470651       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:36:45.503580       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:36:45.503601       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:36:45.503610       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [c835d06b811afe7277524798594204743e6b5c98eb025ff53b5a2bbdf7a96794] <==
	I1115 10:36:43.585537       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:36:43.711194       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:36:43.811824       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:36:43.811887       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1115 10:36:43.812030       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:36:43.830921       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:36:43.831005       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:36:43.836768       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:36:43.837251       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:36:43.837288       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:36:43.839185       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:36:43.839209       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:36:43.839274       1 config.go:200] "Starting service config controller"
	I1115 10:36:43.839644       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:36:43.839671       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:36:43.839685       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:36:43.839814       1 config.go:309] "Starting node config controller"
	I1115 10:36:43.839847       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:36:43.839855       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:36:43.939411       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:36:43.940138       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 10:36:43.940149       1 shared_informer.go:356] "Caches are synced" controller="service config"
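	
	# Hedged sketch prompted by the nodePortAddresses warning above; it assumes a
	# kubeadm-style "kube-proxy" ConfigMap in kube-system (usual for minikube) and
	# only checks whether nodePortAddresses is set at all.
	kubectl --context default-k8s-diff-port-026691 -n kube-system \
	  get configmap kube-proxy -o yaml | grep -n nodePortAddresses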
	
	
	==> kube-scheduler [b04411b3a023360282fda35601339f2000829b38e465e5c4c130b7c58e111bb3] <==
	I1115 10:36:40.077537       1 serving.go:386] Generated self-signed cert in-memory
	W1115 10:36:42.146529       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1115 10:36:42.146566       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1115 10:36:42.146579       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1115 10:36:42.146589       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1115 10:36:42.247541       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 10:36:42.247777       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:36:42.252142       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:36:42.252199       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:36:42.252850       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 10:36:42.252920       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:36:42.353299       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
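	
	# Hedged instantiation of the remedy quoted in the scheduler warning above; the
	# rolebinding name and the kube-system:kube-scheduler subject are illustrative
	# placeholders, not values taken from this run.
	kubectl -n kube-system create rolebinding extension-apiserver-authentication-reader \
	  --role=extension-apiserver-authentication-reader \
	  --serviceaccount=kube-system:kube-scheduler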
	
	
	==> kubelet <==
	Nov 15 10:36:46 default-k8s-diff-port-026691 kubelet[840]: I1115 10:36:46.068659     840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/230beb1a-4842-4cb2-b64f-07d59686ef2c-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-lnfbf\" (UID: \"230beb1a-4842-4cb2-b64f-07d59686ef2c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lnfbf"
	Nov 15 10:36:46 default-k8s-diff-port-026691 kubelet[840]: W1115 10:36:46.242882     840 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/acb25a518a850a99b04a9ecc3ee276eeb45082aa05801ad29c50dc8761212798/crio-1541e4c08bf7c68bff0c2d52ab6dccdfab97bf5475884f904c0f2ee819a77479 WatchSource:0}: Error finding container 1541e4c08bf7c68bff0c2d52ab6dccdfab97bf5475884f904c0f2ee819a77479: Status 404 returned error can't find the container with id 1541e4c08bf7c68bff0c2d52ab6dccdfab97bf5475884f904c0f2ee819a77479
	Nov 15 10:36:46 default-k8s-diff-port-026691 kubelet[840]: W1115 10:36:46.252112     840 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/acb25a518a850a99b04a9ecc3ee276eeb45082aa05801ad29c50dc8761212798/crio-21717aa5a6d697df70c9e71b74b7d7b24dc009e4f909506454bb0837c2d99722 WatchSource:0}: Error finding container 21717aa5a6d697df70c9e71b74b7d7b24dc009e4f909506454bb0837c2d99722: Status 404 returned error can't find the container with id 21717aa5a6d697df70c9e71b74b7d7b24dc009e4f909506454bb0837c2d99722
	Nov 15 10:36:49 default-k8s-diff-port-026691 kubelet[840]: I1115 10:36:49.884137     840 scope.go:117] "RemoveContainer" containerID="11cf4e27ed86db07329d5e4d3a9ba83f4cb58b48af47eb26166bbf4b7788089e"
	Nov 15 10:36:50 default-k8s-diff-port-026691 kubelet[840]: I1115 10:36:50.889226     840 scope.go:117] "RemoveContainer" containerID="11cf4e27ed86db07329d5e4d3a9ba83f4cb58b48af47eb26166bbf4b7788089e"
	Nov 15 10:36:50 default-k8s-diff-port-026691 kubelet[840]: I1115 10:36:50.889401     840 scope.go:117] "RemoveContainer" containerID="d137bb0de6482bc21bf26ec2b5a8cf6fcbae6e8f37c1ee370da009a0dcbdb52d"
	Nov 15 10:36:50 default-k8s-diff-port-026691 kubelet[840]: E1115 10:36:50.889590     840 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rtx7l_kubernetes-dashboard(16880d02-ce40-4d30-8524-6f66aa5404f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rtx7l" podUID="16880d02-ce40-4d30-8524-6f66aa5404f5"
	Nov 15 10:36:51 default-k8s-diff-port-026691 kubelet[840]: I1115 10:36:51.894021     840 scope.go:117] "RemoveContainer" containerID="d137bb0de6482bc21bf26ec2b5a8cf6fcbae6e8f37c1ee370da009a0dcbdb52d"
	Nov 15 10:36:51 default-k8s-diff-port-026691 kubelet[840]: E1115 10:36:51.894427     840 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rtx7l_kubernetes-dashboard(16880d02-ce40-4d30-8524-6f66aa5404f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rtx7l" podUID="16880d02-ce40-4d30-8524-6f66aa5404f5"
	Nov 15 10:36:52 default-k8s-diff-port-026691 kubelet[840]: I1115 10:36:52.194230     840 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 15 10:36:53 default-k8s-diff-port-026691 kubelet[840]: I1115 10:36:53.363470     840 scope.go:117] "RemoveContainer" containerID="d137bb0de6482bc21bf26ec2b5a8cf6fcbae6e8f37c1ee370da009a0dcbdb52d"
	Nov 15 10:36:53 default-k8s-diff-port-026691 kubelet[840]: E1115 10:36:53.363704     840 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rtx7l_kubernetes-dashboard(16880d02-ce40-4d30-8524-6f66aa5404f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rtx7l" podUID="16880d02-ce40-4d30-8524-6f66aa5404f5"
	Nov 15 10:36:55 default-k8s-diff-port-026691 kubelet[840]: I1115 10:36:55.960798     840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lnfbf" podStartSLOduration=1.908473374 podStartE2EDuration="10.960772714s" podCreationTimestamp="2025-11-15 10:36:45 +0000 UTC" firstStartedPulling="2025-11-15 10:36:46.254673472 +0000 UTC m=+8.642502283" lastFinishedPulling="2025-11-15 10:36:55.306972823 +0000 UTC m=+17.694801623" observedRunningTime="2025-11-15 10:36:55.960417775 +0000 UTC m=+18.348246594" watchObservedRunningTime="2025-11-15 10:36:55.960772714 +0000 UTC m=+18.348601533"
	Nov 15 10:37:07 default-k8s-diff-port-026691 kubelet[840]: I1115 10:37:07.757450     840 scope.go:117] "RemoveContainer" containerID="d137bb0de6482bc21bf26ec2b5a8cf6fcbae6e8f37c1ee370da009a0dcbdb52d"
	Nov 15 10:37:07 default-k8s-diff-port-026691 kubelet[840]: I1115 10:37:07.980434     840 scope.go:117] "RemoveContainer" containerID="d137bb0de6482bc21bf26ec2b5a8cf6fcbae6e8f37c1ee370da009a0dcbdb52d"
	Nov 15 10:37:07 default-k8s-diff-port-026691 kubelet[840]: I1115 10:37:07.980665     840 scope.go:117] "RemoveContainer" containerID="8168cf11cc97ab90349686f8d781936b53a8680baa790ca9124eb4fee99df98d"
	Nov 15 10:37:07 default-k8s-diff-port-026691 kubelet[840]: E1115 10:37:07.980886     840 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rtx7l_kubernetes-dashboard(16880d02-ce40-4d30-8524-6f66aa5404f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rtx7l" podUID="16880d02-ce40-4d30-8524-6f66aa5404f5"
	Nov 15 10:37:13 default-k8s-diff-port-026691 kubelet[840]: I1115 10:37:13.363598     840 scope.go:117] "RemoveContainer" containerID="8168cf11cc97ab90349686f8d781936b53a8680baa790ca9124eb4fee99df98d"
	Nov 15 10:37:13 default-k8s-diff-port-026691 kubelet[840]: E1115 10:37:13.363788     840 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rtx7l_kubernetes-dashboard(16880d02-ce40-4d30-8524-6f66aa5404f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rtx7l" podUID="16880d02-ce40-4d30-8524-6f66aa5404f5"
	Nov 15 10:37:13 default-k8s-diff-port-026691 kubelet[840]: I1115 10:37:13.997322     840 scope.go:117] "RemoveContainer" containerID="13394bc9d7c6680728c8c7f5b7c939c8bf8ddf701e93585d0d249b4debb8779d"
	Nov 15 10:37:27 default-k8s-diff-port-026691 kubelet[840]: I1115 10:37:27.757987     840 scope.go:117] "RemoveContainer" containerID="8168cf11cc97ab90349686f8d781936b53a8680baa790ca9124eb4fee99df98d"
	Nov 15 10:37:27 default-k8s-diff-port-026691 kubelet[840]: E1115 10:37:27.758244     840 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rtx7l_kubernetes-dashboard(16880d02-ce40-4d30-8524-6f66aa5404f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rtx7l" podUID="16880d02-ce40-4d30-8524-6f66aa5404f5"
	Nov 15 10:37:36 default-k8s-diff-port-026691 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:37:36 default-k8s-diff-port-026691 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:37:36 default-k8s-diff-port-026691 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
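	
	# Hedged follow-up for the CrashLoopBackOff entries above (not executed in this
	# run): fetch the previous container logs of the failing metrics-scraper pod.
	kubectl --context default-k8s-diff-port-026691 -n kubernetes-dashboard \
	  logs dashboard-metrics-scraper-6ffb444bf9-rtx7l --previous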
	
	
	==> kubernetes-dashboard [b7dca0b8853e890d3a900bd3933f60ce1727d329f35d122cf14fd332ab681fb0] <==
	2025/11/15 10:36:55 Starting overwatch
	2025/11/15 10:36:55 Using namespace: kubernetes-dashboard
	2025/11/15 10:36:55 Using in-cluster config to connect to apiserver
	2025/11/15 10:36:55 Using secret token for csrf signing
	2025/11/15 10:36:55 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 10:36:55 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 10:36:55 Successful initial request to the apiserver, version: v1.34.1
	2025/11/15 10:36:55 Generating JWE encryption key
	2025/11/15 10:36:55 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 10:36:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 10:36:55 Initializing JWE encryption key from synchronized object
	2025/11/15 10:36:55 Creating in-cluster Sidecar client
	2025/11/15 10:36:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:36:55 Serving insecurely on HTTP port: 9090
	2025/11/15 10:37:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
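	
	# Hedged check suggested by the metric-client failures above: confirm that the
	# dashboard-metrics-scraper Service the dashboard keeps retrying actually exists.
	kubectl --context default-k8s-diff-port-026691 -n kubernetes-dashboard \
	  get svc dashboard-metrics-scraper -o wide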
	
	
	==> storage-provisioner [13394bc9d7c6680728c8c7f5b7c939c8bf8ddf701e93585d0d249b4debb8779d] <==
	I1115 10:36:43.552239       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 10:37:13.556514       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
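	
	# Hedged connectivity probe for the i/o timeout above (illustrative only; it
	# assumes curl is present in the node image).
	out/minikube-linux-amd64 -p default-k8s-diff-port-026691 ssh -- curl -sk -m 5 https://10.96.0.1:443/version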
	
	
	==> storage-provisioner [c80de6cf1abc0ce1c57707adc801c943013ebf29b303b1b301153401cfb9e7f4] <==
	I1115 10:37:14.050452       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 10:37:14.058749       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:37:14.058792       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 10:37:14.060926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:17.515637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:21.776464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:25.374895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:28.428826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:31.451139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:31.455380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:37:31.455523       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:37:31.455609       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e5b7cf19-8a06-483d-895a-a97445d789b0", APIVersion:"v1", ResourceVersion:"687", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-026691_09f8484c-9adc-4192-b44b-479815d28210 became leader
	I1115 10:37:31.455657       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-026691_09f8484c-9adc-4192-b44b-479815d28210!
	W1115 10:37:31.457391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:31.460850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:37:31.555909       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-026691_09f8484c-9adc-4192-b44b-479815d28210!
	W1115 10:37:33.463564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:33.467924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:35.470761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:35.474935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:37.478529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:37.483343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
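	
	# Hedged illustration of the deprecation warnings above: the leader-election
	# object is the legacy v1 Endpoints named in the event at 10:37:31, and the
	# suggested replacement API group is discovery.k8s.io/v1 EndpointSlice.
	kubectl --context default-k8s-diff-port-026691 -n kube-system get endpoints k8s.io-minikube-hostpath
	kubectl --context default-k8s-diff-port-026691 -n kube-system get endpointslices.discovery.k8s.io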
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-026691 -n default-k8s-diff-port-026691
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-026691 -n default-k8s-diff-port-026691: exit status 2 (318.092828ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-026691 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-026691
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-026691:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "acb25a518a850a99b04a9ecc3ee276eeb45082aa05801ad29c50dc8761212798",
	        "Created": "2025-11-15T10:34:56.785604479Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 388621,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:36:31.424936822Z",
	            "FinishedAt": "2025-11-15T10:36:30.544090667Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/acb25a518a850a99b04a9ecc3ee276eeb45082aa05801ad29c50dc8761212798/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/acb25a518a850a99b04a9ecc3ee276eeb45082aa05801ad29c50dc8761212798/hostname",
	        "HostsPath": "/var/lib/docker/containers/acb25a518a850a99b04a9ecc3ee276eeb45082aa05801ad29c50dc8761212798/hosts",
	        "LogPath": "/var/lib/docker/containers/acb25a518a850a99b04a9ecc3ee276eeb45082aa05801ad29c50dc8761212798/acb25a518a850a99b04a9ecc3ee276eeb45082aa05801ad29c50dc8761212798-json.log",
	        "Name": "/default-k8s-diff-port-026691",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-026691:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-026691",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "acb25a518a850a99b04a9ecc3ee276eeb45082aa05801ad29c50dc8761212798",
	                "LowerDir": "/var/lib/docker/overlay2/330082c0af87d9272d4fc35061c9dcf149c94a075cbe50833b0301c28218115b-init/diff:/var/lib/docker/overlay2/507c85b6fb43d9c98216fac79d7a8d08dd20b2a63d0fbfc758336e5d38c04044/diff",
	                "MergedDir": "/var/lib/docker/overlay2/330082c0af87d9272d4fc35061c9dcf149c94a075cbe50833b0301c28218115b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/330082c0af87d9272d4fc35061c9dcf149c94a075cbe50833b0301c28218115b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/330082c0af87d9272d4fc35061c9dcf149c94a075cbe50833b0301c28218115b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-026691",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-026691/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-026691",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-026691",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-026691",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "0ee50bddc8180ffef82c9f1d4c30dce0a82dd04cbdf3ae2c6ad4b2dd0e9c09ca",
	            "SandboxKey": "/var/run/docker/netns/0ee50bddc818",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-026691": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a057ad05bea093d4f46407b93bd0d97f5f0b4004a2f1151b31de55e2e2a06fb7",
	                    "EndpointID": "95ae4a6178ed12ab94e3095f2e9d937b033e8f61777d2b6ba2953a3a6a79f9ec",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "92:be:c7:10:04:57",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-026691",
	                        "acb25a518a85"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
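As a hedged aside (not from the recorded run), the API server port mapping buried in the inspect output above can be read directly with a Go template; the 8444/tcp key and profile name come from that output, the template itself is illustrative:

	docker inspect default-k8s-diff-port-026691 --format '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'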
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-026691 -n default-k8s-diff-port-026691
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-026691 -n default-k8s-diff-port-026691: exit status 2 (313.409844ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
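A hedged way to see why the templated status call exits with status 2 (not executed here) is the same status command without a --format template, which prints every component field for the profile:

	out/minikube-linux-amd64 status -p default-k8s-diff-port-026691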
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-026691 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-026691 logs -n 25: (1.064592027s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p embed-certs-719574 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p old-k8s-version-087235                                                                                                                                                                                                                     │ old-k8s-version-087235       │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p newest-cni-086099 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:36 UTC │
	│ image   │ no-preload-283677 image list --format=json                                                                                                                                                                                                    │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ pause   │ -p no-preload-283677 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ delete  │ -p no-preload-283677                                                                                                                                                                                                                          │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p no-preload-283677                                                                                                                                                                                                                          │ no-preload-283677            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-026691 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-026691 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p newest-cni-086099 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ stop    │ -p newest-cni-086099 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-086099 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ start   │ -p newest-cni-086099 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-026691 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ start   │ -p default-k8s-diff-port-026691 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:37 UTC │
	│ image   │ newest-cni-086099 image list --format=json                                                                                                                                                                                                    │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ pause   │ -p newest-cni-086099 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ delete  │ -p newest-cni-086099                                                                                                                                                                                                                          │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p newest-cni-086099                                                                                                                                                                                                                          │ newest-cni-086099            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ image   │ embed-certs-719574 image list --format=json                                                                                                                                                                                                   │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ pause   │ -p embed-certs-719574 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ delete  │ -p embed-certs-719574                                                                                                                                                                                                                         │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p embed-certs-719574                                                                                                                                                                                                                         │ embed-certs-719574           │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:37 UTC │
	│ image   │ default-k8s-diff-port-026691 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ pause   │ -p default-k8s-diff-port-026691 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-026691 │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
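	
	# Hedged reproduction sketch of the last audited command above (the pause call
	# with no recorded end time); the flags mirror the table entry exactly.
	out/minikube-linux-amd64 pause -p default-k8s-diff-port-026691 --alsologtostderr -v=1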
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:36:31
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:36:31.193182  388420 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:36:31.193281  388420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:36:31.193289  388420 out.go:374] Setting ErrFile to fd 2...
	I1115 10:36:31.193293  388420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:36:31.193515  388420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 10:36:31.193933  388420 out.go:368] Setting JSON to false
	I1115 10:36:31.195111  388420 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8328,"bootTime":1763194663,"procs":266,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:36:31.195216  388420 start.go:143] virtualization: kvm guest
	I1115 10:36:31.196894  388420 out.go:179] * [default-k8s-diff-port-026691] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:36:31.198076  388420 notify.go:221] Checking for updates...
	I1115 10:36:31.198087  388420 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 10:36:31.199249  388420 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:36:31.200471  388420 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:36:31.201512  388420 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube
	I1115 10:36:31.202449  388420 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:36:31.203634  388420 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:36:31.205205  388420 config.go:182] Loaded profile config "default-k8s-diff-port-026691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:31.205718  388420 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:36:31.228892  388420 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:36:31.229044  388420 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:36:31.285898  388420 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:67 SystemTime:2025-11-15 10:36:31.276283811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:36:31.286032  388420 docker.go:319] overlay module found
	I1115 10:36:31.287655  388420 out.go:179] * Using the docker driver based on existing profile
	I1115 10:36:31.288859  388420 start.go:309] selected driver: docker
	I1115 10:36:31.288877  388420 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-026691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:31.288972  388420 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:36:31.289812  388420 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:36:31.352009  388420 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:66 SystemTime:2025-11-15 10:36:31.342104199 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:36:31.352371  388420 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:36:31.352408  388420 cni.go:84] Creating CNI manager for ""
	I1115 10:36:31.352457  388420 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:36:31.352498  388420 start.go:353] cluster config:
	{Name:default-k8s-diff-port-026691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:31.354418  388420 out.go:179] * Starting "default-k8s-diff-port-026691" primary control-plane node in "default-k8s-diff-port-026691" cluster
	I1115 10:36:31.355595  388420 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:36:31.356825  388420 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:36:31.357856  388420 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:36:31.357890  388420 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 10:36:31.357905  388420 cache.go:65] Caching tarball of preloaded images
	I1115 10:36:31.357944  388420 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:36:31.358020  388420 preload.go:238] Found /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 10:36:31.358036  388420 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:36:31.358136  388420 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/config.json ...
	I1115 10:36:31.378843  388420 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:36:31.378864  388420 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:36:31.378881  388420 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:36:31.378904  388420 start.go:360] acquireMachinesLock for default-k8s-diff-port-026691: {Name:mk1f3196dd9a24a043fa707553211d0b0ea8c1f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:36:31.378986  388420 start.go:364] duration metric: took 61.257µs to acquireMachinesLock for "default-k8s-diff-port-026691"
	I1115 10:36:31.379010  388420 start.go:96] Skipping create...Using existing machine configuration
	I1115 10:36:31.379018  388420 fix.go:54] fixHost starting: 
	I1115 10:36:31.379252  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:31.397025  388420 fix.go:112] recreateIfNeeded on default-k8s-diff-port-026691: state=Stopped err=<nil>
	W1115 10:36:31.397068  388420 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 10:36:29.135135  387591 out.go:252] * Restarting existing docker container for "newest-cni-086099" ...
	I1115 10:36:29.135222  387591 cli_runner.go:164] Run: docker start newest-cni-086099
	I1115 10:36:29.412428  387591 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:29.431258  387591 kic.go:430] container "newest-cni-086099" state is running.
	I1115 10:36:29.431760  387591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-086099
	I1115 10:36:29.450271  387591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/config.json ...
	I1115 10:36:29.450487  387591 machine.go:94] provisionDockerMachine start ...
	I1115 10:36:29.450542  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:29.468796  387591 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:29.469141  387591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1115 10:36:29.469158  387591 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:36:29.469768  387591 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43374->127.0.0.1:33129: read: connection reset by peer
	I1115 10:36:32.597021  387591 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-086099
	
	I1115 10:36:32.597063  387591 ubuntu.go:182] provisioning hostname "newest-cni-086099"
	I1115 10:36:32.597140  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:32.616934  387591 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:32.617209  387591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1115 10:36:32.617233  387591 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-086099 && echo "newest-cni-086099" | sudo tee /etc/hostname
	I1115 10:36:32.756237  387591 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-086099
	
	I1115 10:36:32.756329  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:32.775168  387591 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:32.775389  387591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1115 10:36:32.775405  387591 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-086099' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-086099/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-086099' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:36:32.902668  387591 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:36:32.902701  387591 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-55448/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-55448/.minikube}
	I1115 10:36:32.902736  387591 ubuntu.go:190] setting up certificates
	I1115 10:36:32.902754  387591 provision.go:84] configureAuth start
	I1115 10:36:32.902811  387591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-086099
	I1115 10:36:32.921923  387591 provision.go:143] copyHostCerts
	I1115 10:36:32.922017  387591 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem, removing ...
	I1115 10:36:32.922035  387591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem
	I1115 10:36:32.922102  387591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem (1082 bytes)
	I1115 10:36:32.922216  387591 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem, removing ...
	I1115 10:36:32.922225  387591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem
	I1115 10:36:32.922253  387591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem (1123 bytes)
	I1115 10:36:32.922341  387591 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem, removing ...
	I1115 10:36:32.922348  387591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem
	I1115 10:36:32.922372  387591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem (1679 bytes)
	I1115 10:36:32.922421  387591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem org=jenkins.newest-cni-086099 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-086099]
	I1115 10:36:32.940854  387591 provision.go:177] copyRemoteCerts
	I1115 10:36:32.940914  387591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:36:32.940948  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:32.958931  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:33.053731  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:36:33.071243  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:36:33.088651  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 10:36:33.105219  387591 provision.go:87] duration metric: took 202.453369ms to configureAuth
	I1115 10:36:33.105244  387591 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:36:33.105414  387591 config.go:182] Loaded profile config "newest-cni-086099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:33.105509  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:33.123012  387591 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:33.123259  387591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1115 10:36:33.123277  387591 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:36:33.389799  387591 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:36:33.389822  387591 machine.go:97] duration metric: took 3.93932207s to provisionDockerMachine
	I1115 10:36:33.389835  387591 start.go:293] postStartSetup for "newest-cni-086099" (driver="docker")
	I1115 10:36:33.389844  387591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:36:33.389903  387591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:36:33.389946  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:33.409403  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:33.503330  387591 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:36:33.506790  387591 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:36:33.506815  387591 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:36:33.506825  387591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/addons for local assets ...
	I1115 10:36:33.506878  387591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/files for local assets ...
	I1115 10:36:33.506995  387591 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem -> 589622.pem in /etc/ssl/certs
	I1115 10:36:33.507126  387591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:36:33.514570  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:33.531880  387591 start.go:296] duration metric: took 142.028023ms for postStartSetup
	I1115 10:36:33.532012  387591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:36:33.532066  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:33.549908  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:33.640348  387591 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:36:33.645124  387591 fix.go:56] duration metric: took 4.529931109s for fixHost
	I1115 10:36:33.645164  387591 start.go:83] releasing machines lock for "newest-cni-086099", held for 4.529982501s
	I1115 10:36:33.645246  387591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-086099
	I1115 10:36:33.663364  387591 ssh_runner.go:195] Run: cat /version.json
	I1115 10:36:33.663400  387591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:36:33.663445  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:33.663461  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:33.682200  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:33.682521  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:33.827221  387591 ssh_runner.go:195] Run: systemctl --version
	I1115 10:36:33.834019  387591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:36:33.868151  387591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:36:33.872995  387591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:36:33.873067  387591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:36:33.881540  387591 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:36:33.881563  387591 start.go:496] detecting cgroup driver to use...
	I1115 10:36:33.881595  387591 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:36:33.881628  387591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:36:33.895704  387591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:36:33.907633  387591 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:36:33.907681  387591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:36:33.921408  387591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	W1115 10:36:30.745845  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	W1115 10:36:32.746544  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	I1115 10:36:33.933689  387591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:36:34.015025  387591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:36:34.097166  387591 docker.go:234] disabling docker service ...
	I1115 10:36:34.097250  387591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:36:34.111501  387591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:36:34.123898  387591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:36:34.208076  387591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:36:34.289077  387591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:36:34.302010  387591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:36:34.316333  387591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:36:34.316409  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.325113  387591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:36:34.325175  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.333844  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.342343  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.350817  387591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:36:34.359269  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.368008  387591 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.376100  387591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:34.384822  387591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:36:34.392091  387591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:36:34.399149  387591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:34.478616  387591 ssh_runner.go:195] Run: sudo systemctl restart crio
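A hedged way to confirm the effect of the sed edits above on the CRI-O drop-in; the resulting file contents are not captured in this log, and the paths and key names are taken from the commands shown:
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl is-active crio   # should report "active" after the restart above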
	I1115 10:36:34.580323  387591 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:36:34.580408  387591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:36:34.584509  387591 start.go:564] Will wait 60s for crictl version
	I1115 10:36:34.584568  387591 ssh_runner.go:195] Run: which crictl
	I1115 10:36:34.588078  387591 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:36:34.613070  387591 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:36:34.613150  387591 ssh_runner.go:195] Run: crio --version
	I1115 10:36:34.641080  387591 ssh_runner.go:195] Run: crio --version
	I1115 10:36:34.670335  387591 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:36:34.671690  387591 cli_runner.go:164] Run: docker network inspect newest-cni-086099 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:36:34.689678  387591 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1115 10:36:34.693973  387591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:34.705342  387591 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1115 10:36:31.398937  388420 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-026691" ...
	I1115 10:36:31.399016  388420 cli_runner.go:164] Run: docker start default-k8s-diff-port-026691
	I1115 10:36:31.676189  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:31.694382  388420 kic.go:430] container "default-k8s-diff-port-026691" state is running.
	I1115 10:36:31.694751  388420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-026691
	I1115 10:36:31.713425  388420 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/config.json ...
	I1115 10:36:31.713652  388420 machine.go:94] provisionDockerMachine start ...
	I1115 10:36:31.713746  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:31.732991  388420 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:31.733252  388420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1115 10:36:31.733277  388420 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:36:31.734038  388420 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45950->127.0.0.1:33134: read: connection reset by peer
	I1115 10:36:34.867843  388420 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-026691
	
	I1115 10:36:34.867883  388420 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-026691"
	I1115 10:36:34.868072  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:34.887800  388420 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:34.888079  388420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1115 10:36:34.888098  388420 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-026691 && echo "default-k8s-diff-port-026691" | sudo tee /etc/hostname
	I1115 10:36:35.027312  388420 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-026691
	
	I1115 10:36:35.027402  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:35.049307  388420 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:35.049620  388420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1115 10:36:35.049653  388420 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-026691' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-026691/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-026691' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:36:35.185792  388420 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:36:35.185824  388420 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-55448/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-55448/.minikube}
	I1115 10:36:35.185877  388420 ubuntu.go:190] setting up certificates
	I1115 10:36:35.185889  388420 provision.go:84] configureAuth start
	I1115 10:36:35.185975  388420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-026691
	I1115 10:36:35.205215  388420 provision.go:143] copyHostCerts
	I1115 10:36:35.205302  388420 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem, removing ...
	I1115 10:36:35.205325  388420 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem
	I1115 10:36:35.205419  388420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/ca.pem (1082 bytes)
	I1115 10:36:35.205578  388420 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem, removing ...
	I1115 10:36:35.205600  388420 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem
	I1115 10:36:35.205648  388420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/cert.pem (1123 bytes)
	I1115 10:36:35.205811  388420 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem, removing ...
	I1115 10:36:35.205831  388420 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem
	I1115 10:36:35.205877  388420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-55448/.minikube/key.pem (1679 bytes)
	I1115 10:36:35.205988  388420 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-026691 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-026691 localhost minikube]
	I1115 10:36:35.356382  388420 provision.go:177] copyRemoteCerts
	I1115 10:36:35.356441  388420 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:36:35.356476  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:35.375752  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:35.470476  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:36:35.488150  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1115 10:36:35.505264  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:36:35.522854  388420 provision.go:87] duration metric: took 336.947608ms to configureAuth
	I1115 10:36:35.522880  388420 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:36:35.523120  388420 config.go:182] Loaded profile config "default-k8s-diff-port-026691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:35.523282  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:35.543167  388420 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:35.543480  388420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1115 10:36:35.543509  388420 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:36:35.848476  388420 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:36:35.848509  388420 machine.go:97] duration metric: took 4.134839636s to provisionDockerMachine
	I1115 10:36:35.848525  388420 start.go:293] postStartSetup for "default-k8s-diff-port-026691" (driver="docker")
	I1115 10:36:35.848541  388420 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:36:35.848616  388420 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:36:35.848671  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:35.868537  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:35.963605  388420 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:36:35.967175  388420 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:36:35.967199  388420 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:36:35.967209  388420 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/addons for local assets ...
	I1115 10:36:35.967263  388420 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-55448/.minikube/files for local assets ...
	I1115 10:36:35.967339  388420 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem -> 589622.pem in /etc/ssl/certs
	I1115 10:36:35.967422  388420 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:36:35.975404  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:35.992754  388420 start.go:296] duration metric: took 144.211835ms for postStartSetup
	I1115 10:36:35.992851  388420 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:36:35.992902  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:36.010853  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:36.106652  388420 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:36:36.111301  388420 fix.go:56] duration metric: took 4.732276816s for fixHost
	I1115 10:36:36.111327  388420 start.go:83] releasing machines lock for "default-k8s-diff-port-026691", held for 4.732326241s
	I1115 10:36:36.111401  388420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-026691
	I1115 10:36:36.133087  388420 ssh_runner.go:195] Run: cat /version.json
	I1115 10:36:36.133147  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:36.133224  388420 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:36:36.133295  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:36.161597  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:36.162169  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:34.706341  387591 kubeadm.go:884] updating cluster {Name:newest-cni-086099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:36:34.706463  387591 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:36:34.706520  387591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:34.737832  387591 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:34.737871  387591 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:36:34.737929  387591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:34.765628  387591 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:34.765650  387591 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:36:34.765657  387591 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1115 10:36:34.765750  387591 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-086099 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:36:34.765813  387591 ssh_runner.go:195] Run: crio config
	I1115 10:36:34.812764  387591 cni.go:84] Creating CNI manager for ""
	I1115 10:36:34.812787  387591 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:36:34.812806  387591 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1115 10:36:34.812836  387591 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-086099 NodeName:newest-cni-086099 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:36:34.813018  387591 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-086099"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:36:34.813097  387591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:36:34.821514  387591 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:36:34.821582  387591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:36:34.829425  387591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1115 10:36:34.841803  387591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:36:34.854099  387591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1115 10:36:34.867123  387591 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:36:34.871300  387591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:34.882157  387591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:34.965624  387591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:36:34.991396  387591 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099 for IP: 192.168.103.2
	I1115 10:36:34.991421  387591 certs.go:195] generating shared ca certs ...
	I1115 10:36:34.991442  387591 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:34.991611  387591 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 10:36:34.991670  387591 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 10:36:34.991685  387591 certs.go:257] generating profile certs ...
	I1115 10:36:34.991800  387591 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/client.key
	I1115 10:36:34.991881  387591 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key.d719cdad
	I1115 10:36:34.991938  387591 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.key
	I1115 10:36:34.992114  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:36:34.992160  387591 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:36:34.992182  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:36:34.992223  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:36:34.992266  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:36:34.992298  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:36:34.992360  387591 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:34.993060  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:36:35.012346  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:36:35.032525  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:36:35.052616  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:36:35.116969  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 10:36:35.141400  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:36:35.160318  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:36:35.178367  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/newest-cni-086099/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:36:35.231343  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:36:35.251073  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:36:35.269574  387591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:36:35.287839  387591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:36:35.300609  387591 ssh_runner.go:195] Run: openssl version
	I1115 10:36:35.306757  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:36:35.315111  387591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:36:35.318673  387591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:36:35.318726  387591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:36:35.352595  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:36:35.360661  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:36:35.369044  387591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:36:35.373102  387591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:36:35.373149  387591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:36:35.407763  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:36:35.416805  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:36:35.426105  387591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:35.429879  387591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:35.429928  387591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:35.464376  387591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:36:35.472689  387591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:36:35.476537  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:36:35.513422  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:36:35.552107  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:36:35.627892  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:36:35.738207  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:36:35.927631  387591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
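The -checkend 86400 runs above ask whether each certificate remains valid for at least the next 24 hours; openssl exits 0 if it does, nonzero if it expires within that window. A minimal illustration, using one of the certificate paths from the log:
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"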
	I1115 10:36:36.020791  387591 kubeadm.go:401] StartCluster: {Name:newest-cni-086099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-086099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:36.020915  387591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:36:36.020993  387591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:36:36.054712  387591 cri.go:89] found id: "38ec6363bcab12e98490c10ae199bee3e96e76c3b252eb09eaacbceb79f4fee3"
	I1115 10:36:36.054741  387591 cri.go:89] found id: "dcddb7cd9963badc91f7602efc0c7dd3a6aa66928df5288774cb992a0f211b2c"
	I1115 10:36:36.054748  387591 cri.go:89] found id: "938d8a7a407d113893b3543524a1f018292ab3c06ad38c37347f5c09b4f19aed"
	I1115 10:36:36.054753  387591 cri.go:89] found id: "6799daac297c17fbe94a59a4b23a00cc23bdfe7671f3f31f803b074be4ef25a5"
	I1115 10:36:36.054758  387591 cri.go:89] found id: ""
	I1115 10:36:36.054810  387591 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:36:36.122342  387591 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:36Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:36:36.122434  387591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:36:36.132788  387591 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:36:36.132807  387591 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:36:36.132853  387591 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:36:36.144175  387591 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:36:36.145209  387591 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-086099" does not appear in /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:36:36.145870  387591 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-55448/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-086099" cluster setting kubeconfig missing "newest-cni-086099" context setting]
	I1115 10:36:36.146847  387591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:36.149871  387591 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:36:36.217177  387591 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1115 10:36:36.217217  387591 kubeadm.go:602] duration metric: took 84.40299ms to restartPrimaryControlPlane
	I1115 10:36:36.217231  387591 kubeadm.go:403] duration metric: took 196.454161ms to StartCluster
	I1115 10:36:36.217253  387591 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:36.217343  387591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:36:36.218632  387591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:36.218872  387591 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:36:36.218972  387591 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:36:36.219074  387591 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-086099"
	I1115 10:36:36.219094  387591 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-086099"
	W1115 10:36:36.219105  387591 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:36:36.219138  387591 host.go:66] Checking if "newest-cni-086099" exists ...
	I1115 10:36:36.219158  387591 addons.go:70] Setting dashboard=true in profile "newest-cni-086099"
	I1115 10:36:36.219163  387591 config.go:182] Loaded profile config "newest-cni-086099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:36.219193  387591 addons.go:239] Setting addon dashboard=true in "newest-cni-086099"
	W1115 10:36:36.219202  387591 addons.go:248] addon dashboard should already be in state true
	I1115 10:36:36.219217  387591 addons.go:70] Setting default-storageclass=true in profile "newest-cni-086099"
	I1115 10:36:36.219235  387591 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-086099"
	I1115 10:36:36.219248  387591 host.go:66] Checking if "newest-cni-086099" exists ...
	I1115 10:36:36.219557  387591 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:36.219712  387591 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:36.219712  387591 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:36.220680  387591 out.go:179] * Verifying Kubernetes components...
	I1115 10:36:36.221665  387591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:36.248161  387591 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:36:36.248172  387591 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 10:36:36.249608  387591 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:36:36.249628  387591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:36:36.249683  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:36.249733  387591 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 10:36:36.324481  388420 ssh_runner.go:195] Run: systemctl --version
	I1115 10:36:36.336623  388420 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:36:36.372576  388420 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:36:36.377572  388420 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:36:36.377633  388420 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:36:36.385687  388420 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
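
	Disabling stray bridge/podman CNI configs (none were present here) is a rename sweep over /etc/cni/net.d; a hedged sketch of the same sweep, using a slightly different -exec form than the logged command:

	    # rename any bridge/podman CNI config out of the way (adds .mk_disabled)
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
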
	I1115 10:36:36.385710  388420 start.go:496] detecting cgroup driver to use...
	I1115 10:36:36.385740  388420 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:36:36.385776  388420 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:36:36.399728  388420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:36:36.411622  388420 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:36:36.411694  388420 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:36:36.431786  388420 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:36:36.449270  388420 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:36:36.538378  388420 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:36:36.622459  388420 docker.go:234] disabling docker service ...
	I1115 10:36:36.622563  388420 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:36:36.644022  388420 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:36:36.656349  388420 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:36:36.757453  388420 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:36:36.851752  388420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:36:36.864024  388420 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:36:36.878189  388420 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:36:36.878243  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.886869  388420 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:36:36.886944  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.895649  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.904129  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.912660  388420 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:36:36.922601  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.934730  388420 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.945527  388420 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
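
	Taken together, the sed edits above leave the CRI-O drop-in with these values (a sketch of the affected keys only; section headers and untouched fields omitted):

	    # /etc/crio/crio.conf.d/02-crio.conf after the edits above
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
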
	I1115 10:36:36.955227  388420 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:36:36.962702  388420 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:36:36.969927  388420 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:37.064102  388420 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:36:37.181392  388420 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:36:37.181469  388420 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:36:37.185705  388420 start.go:564] Will wait 60s for crictl version
	I1115 10:36:37.185759  388420 ssh_runner.go:195] Run: which crictl
	I1115 10:36:37.189374  388420 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:36:37.214797  388420 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:36:37.214872  388420 ssh_runner.go:195] Run: crio --version
	I1115 10:36:37.247024  388420 ssh_runner.go:195] Run: crio --version
	I1115 10:36:37.283127  388420 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
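
	The restart sequence above boils down to bouncing CRI-O, waiting for its socket, and confirming the runtime answers over CRI; manually:

	    sudo systemctl restart crio
	    # wait for the CRI socket, then ask the runtime for its version
	    until stat /var/run/crio/crio.sock >/dev/null 2>&1; do sleep 1; done
	    sudo crictl version    # reports RuntimeName: cri-o, RuntimeApiVersion: v1
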
	W1115 10:36:35.246243  377744 pod_ready.go:104] pod "coredns-66bc5c9577-fjzk5" is not "Ready", error: <nil>
	I1115 10:36:37.246256  377744 pod_ready.go:94] pod "coredns-66bc5c9577-fjzk5" is "Ready"
	I1115 10:36:37.246283  377744 pod_ready.go:86] duration metric: took 33.505674032s for pod "coredns-66bc5c9577-fjzk5" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.248931  377744 pod_ready.go:83] waiting for pod "etcd-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.253449  377744 pod_ready.go:94] pod "etcd-embed-certs-719574" is "Ready"
	I1115 10:36:37.253477  377744 pod_ready.go:86] duration metric: took 4.523106ms for pod "etcd-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.258749  377744 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.262996  377744 pod_ready.go:94] pod "kube-apiserver-embed-certs-719574" is "Ready"
	I1115 10:36:37.263019  377744 pod_ready.go:86] duration metric: took 4.2473ms for pod "kube-apiserver-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.265400  377744 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.444138  377744 pod_ready.go:94] pod "kube-controller-manager-embed-certs-719574" is "Ready"
	I1115 10:36:37.444168  377744 pod_ready.go:86] duration metric: took 178.743562ms for pod "kube-controller-manager-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:37.644722  377744 pod_ready.go:83] waiting for pod "kube-proxy-kmc8c" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:38.044247  377744 pod_ready.go:94] pod "kube-proxy-kmc8c" is "Ready"
	I1115 10:36:38.044277  377744 pod_ready.go:86] duration metric: took 399.527336ms for pod "kube-proxy-kmc8c" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:38.245350  377744 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:38.644894  377744 pod_ready.go:94] pod "kube-scheduler-embed-certs-719574" is "Ready"
	I1115 10:36:38.645014  377744 pod_ready.go:86] duration metric: took 399.62796ms for pod "kube-scheduler-embed-certs-719574" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:38.645030  377744 pod_ready.go:40] duration metric: took 34.90782271s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:36:38.702511  377744 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:36:38.706562  377744 out.go:179] * Done! kubectl is now configured to use "embed-certs-719574" cluster and "default" namespace by default
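
	The pod_ready loop that just finished is equivalent to waiting for each core component selector to report Ready; a hedged kubectl version (selectors taken from the log, timeout arbitrary, run from a machine whose kubeconfig points at the cluster):

	    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	      kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=120s
	    done
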
	I1115 10:36:37.284492  388420 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-026691 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:36:37.302095  388420 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1115 10:36:37.306321  388420 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
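
	The host.minikube.internal refresh filters any stale entry out of /etc/hosts and appends the current gateway IP; the logged one-liner, spelled out:

	    { grep -v $'\thost.minikube.internal$' /etc/hosts
	      printf '192.168.85.1\thost.minikube.internal\n'
	    } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts
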
	I1115 10:36:37.316768  388420 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-026691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:36:37.316911  388420 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:36:37.316980  388420 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:37.354039  388420 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:37.354063  388420 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:36:37.354121  388420 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:37.384223  388420 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:37.384249  388420 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:36:37.384257  388420 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1115 10:36:37.384353  388420 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-026691 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:36:37.384416  388420 ssh_runner.go:195] Run: crio config
	I1115 10:36:37.429588  388420 cni.go:84] Creating CNI manager for ""
	I1115 10:36:37.429616  388420 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:36:37.429637  388420 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:36:37.429663  388420 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-026691 NodeName:default-k8s-diff-port-026691 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:36:37.429840  388420 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-026691"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:36:37.429922  388420 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:36:37.438488  388420 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:36:37.438583  388420 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:36:37.446984  388420 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1115 10:36:37.459608  388420 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:36:37.472652  388420 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
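
	On a brand-new node the kubeadm config rendered above would be fed straight to kubeadm; on this restart path it is only shipped as kubeadm.yaml.new and diffed later. A hedged sketch of the fresh-install case (binary path assumed from the binaries directory listed above, flags illustrative rather than taken from this run):

	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=all
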
	I1115 10:36:37.484924  388420 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:36:37.488541  388420 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:37.498126  388420 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:37.587175  388420 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:36:37.609456  388420 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691 for IP: 192.168.85.2
	I1115 10:36:37.609480  388420 certs.go:195] generating shared ca certs ...
	I1115 10:36:37.609501  388420 certs.go:227] acquiring lock for ca certs: {Name:mkec8c0ab2b564f4fafe1f7fc86089efa994f6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:37.609671  388420 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key
	I1115 10:36:37.609735  388420 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key
	I1115 10:36:37.609750  388420 certs.go:257] generating profile certs ...
	I1115 10:36:37.609859  388420 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/client.key
	I1115 10:36:37.609921  388420 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.key.f8824eec
	I1115 10:36:37.610007  388420 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.key
	I1115 10:36:37.610146  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem (1338 bytes)
	W1115 10:36:37.610198  388420 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962_empty.pem, impossibly tiny 0 bytes
	I1115 10:36:37.610212  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:36:37.610244  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:36:37.610278  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:36:37.610306  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/certs/key.pem (1679 bytes)
	I1115 10:36:37.610359  388420 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem (1708 bytes)
	I1115 10:36:37.611122  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:36:37.629925  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:36:37.650833  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:36:37.671862  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:36:37.696427  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1115 10:36:37.763348  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:36:37.782654  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:36:37.800720  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/default-k8s-diff-port-026691/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:36:37.817628  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:36:37.835327  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/certs/58962.pem --> /usr/share/ca-certificates/58962.pem (1338 bytes)
	I1115 10:36:37.856769  388420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/ssl/certs/589622.pem --> /usr/share/ca-certificates/589622.pem (1708 bytes)
	I1115 10:36:37.876039  388420 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:36:37.891255  388420 ssh_runner.go:195] Run: openssl version
	I1115 10:36:37.898994  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58962.pem && ln -fs /usr/share/ca-certificates/58962.pem /etc/ssl/certs/58962.pem"
	I1115 10:36:37.907571  388420 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58962.pem
	I1115 10:36:37.912280  388420 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:47 /usr/share/ca-certificates/58962.pem
	I1115 10:36:37.912337  388420 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58962.pem
	I1115 10:36:37.950692  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/58962.pem /etc/ssl/certs/51391683.0"
	I1115 10:36:37.959456  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/589622.pem && ln -fs /usr/share/ca-certificates/589622.pem /etc/ssl/certs/589622.pem"
	I1115 10:36:37.968450  388420 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/589622.pem
	I1115 10:36:37.972465  388420 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:47 /usr/share/ca-certificates/589622.pem
	I1115 10:36:37.972521  388420 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/589622.pem
	I1115 10:36:38.008129  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/589622.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:36:38.016745  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:36:38.027414  388420 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:38.031718  388420 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:38.031792  388420 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:38.077405  388420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
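
	Each certificate is trusted system-wide by symlinking it under its OpenSSL subject-hash name, which is exactly what the hash/ln pairs above do; the pattern for one of them:

	    pem=/usr/share/ca-certificates/minikubeCA.pem
	    hash=$(openssl x509 -hash -noout -in "$pem")     # b5213941 in this run
	    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
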
	I1115 10:36:38.086004  388420 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:36:38.089990  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:36:38.127939  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:36:38.181791  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:36:38.256153  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:36:38.368577  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:36:38.543333  388420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
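
	The -checkend probes ask whether each control-plane certificate stays valid for at least another day (86400 seconds); for example:

	    # exit status 0 means the cert does not expire within the next 86400 seconds
	    sudo openssl x509 -noout -checkend 86400 \
	      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	      && echo "apiserver-kubelet-client.crt is valid for at least another day"
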
	I1115 10:36:38.645754  388420 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-026691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-026691 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:38.645863  388420 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:36:38.645935  388420 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:36:38.685210  388420 cri.go:89] found id: "b04411b3a023360282fda35601339f2000829b38e465e5c4c130b7c58e111bb3"
	I1115 10:36:38.685237  388420 cri.go:89] found id: "971b8e4c2073b59c0dd66594f19cfc1d0b9f81cb1bad24b21946d5ea7b012768"
	I1115 10:36:38.685254  388420 cri.go:89] found id: "6a1db649ea51d34c5661db65312d1c8660828376983f865ce0b3d2801b219c2d"
	I1115 10:36:38.685259  388420 cri.go:89] found id: "58595dd2cf4ce1cb8f740a6594f3ee7a3c7d2587682b4e6c526266c0067303a7"
	I1115 10:36:38.685262  388420 cri.go:89] found id: ""
	I1115 10:36:38.685312  388420 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:36:38.750674  388420 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:38Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:36:38.750744  388420 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:36:38.769157  388420 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:36:38.769186  388420 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:36:38.769238  388420 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:36:38.842499  388420 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:36:38.845337  388420 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-026691" does not appear in /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:36:38.846840  388420 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-55448/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-026691" cluster setting kubeconfig missing "default-k8s-diff-port-026691" context setting]
	I1115 10:36:38.849516  388420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:38.855210  388420 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:36:38.870026  388420 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1115 10:36:38.870059  388420 kubeadm.go:602] duration metric: took 100.86647ms to restartPrimaryControlPlane
	I1115 10:36:38.870073  388420 kubeadm.go:403] duration metric: took 224.328768ms to StartCluster
	I1115 10:36:38.870094  388420 settings.go:142] acquiring lock: {Name:mk94cfac0f6eef2479181468a7a8082a6e5c8f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:38.870172  388420 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:36:38.872536  388420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/kubeconfig: {Name:mkd704b8bdfc88e0f3822576866b75c3c39d3fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:38.872812  388420 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:36:38.873059  388420 config.go:182] Loaded profile config "default-k8s-diff-port-026691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:38.873024  388420 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:36:38.873181  388420 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-026691"
	I1115 10:36:38.873220  388420 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-026691"
	W1115 10:36:38.873240  388420 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:36:38.873315  388420 host.go:66] Checking if "default-k8s-diff-port-026691" exists ...
	I1115 10:36:38.873258  388420 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-026691"
	I1115 10:36:38.873640  388420 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-026691"
	W1115 10:36:38.873663  388420 addons.go:248] addon dashboard should already be in state true
	I1115 10:36:38.873444  388420 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-026691"
	I1115 10:36:38.873728  388420 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-026691"
	I1115 10:36:38.873753  388420 host.go:66] Checking if "default-k8s-diff-port-026691" exists ...
	I1115 10:36:38.874091  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:38.874589  388420 out.go:179] * Verifying Kubernetes components...
	I1115 10:36:38.874818  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:38.875168  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:38.876706  388420 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:38.907308  388420 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:36:38.907363  388420 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-026691"
	W1115 10:36:38.907464  388420 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:36:38.907503  388420 host.go:66] Checking if "default-k8s-diff-port-026691" exists ...
	I1115 10:36:38.908043  388420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-026691 --format={{.State.Status}}
	I1115 10:36:38.912208  388420 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:36:38.912236  388420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:36:38.912295  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:38.915346  388420 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 10:36:38.916793  388420 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 10:36:36.250323  387591 addons.go:239] Setting addon default-storageclass=true in "newest-cni-086099"
	W1115 10:36:36.250350  387591 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:36:36.250389  387591 host.go:66] Checking if "newest-cni-086099" exists ...
	I1115 10:36:36.251476  387591 cli_runner.go:164] Run: docker container inspect newest-cni-086099 --format={{.State.Status}}
	I1115 10:36:36.255103  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 10:36:36.255128  387591 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 10:36:36.255190  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:36.278537  387591 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:36:36.278565  387591 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:36:36.278644  387591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-086099
	I1115 10:36:36.280814  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:36.281721  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:36.296440  387591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/newest-cni-086099/id_rsa Username:docker}
	I1115 10:36:36.630526  387591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:36:36.633566  387591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:36:36.636633  387591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:36:36.638099  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 10:36:36.638116  387591 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 10:36:36.724472  387591 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:36:36.724559  387591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:36:36.729948  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 10:36:36.730015  387591 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 10:36:36.826253  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 10:36:36.826282  387591 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 10:36:36.843537  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 10:36:36.843560  387591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 10:36:36.931895  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 10:36:36.931924  387591 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 10:36:36.945766  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 10:36:36.945791  387591 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 10:36:37.023562  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 10:36:37.023593  387591 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 10:36:37.038918  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 10:36:37.038944  387591 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 10:36:37.052909  387591 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:36:37.052937  387591 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 10:36:37.119950  387591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:36:39.816288  387591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.182684264s)
	I1115 10:36:40.959315  387591 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.234727667s)
	I1115 10:36:40.959363  387591 api_server.go:72] duration metric: took 4.740464162s to wait for apiserver process to appear ...
	I1115 10:36:40.959371  387591 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:36:40.959395  387591 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1115 10:36:40.959325  387591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.322653976s)
	I1115 10:36:40.959440  387591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.839423734s)
	I1115 10:36:40.962518  387591 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-086099 addons enable metrics-server
	
	I1115 10:36:40.964092  387591 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1115 10:36:38.917819  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 10:36:38.917851  388420 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 10:36:38.917924  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:38.930932  388420 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:36:38.930982  388420 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:36:38.931053  388420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-026691
	I1115 10:36:38.933702  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:38.939670  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:38.960258  388420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/default-k8s-diff-port-026691/id_rsa Username:docker}
	I1115 10:36:39.257807  388420 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:36:39.264707  388420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:36:39.270235  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 10:36:39.270261  388420 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 10:36:39.274532  388420 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-026691" to be "Ready" ...
	I1115 10:36:39.351682  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 10:36:39.351725  388420 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 10:36:39.357989  388420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:36:39.374984  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 10:36:39.375011  388420 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 10:36:39.457352  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 10:36:39.457377  388420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 10:36:39.542591  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 10:36:39.542618  388420 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 10:36:39.565925  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 10:36:39.566041  388420 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 10:36:39.580123  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 10:36:39.580242  388420 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 10:36:39.655102  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 10:36:39.655149  388420 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 10:36:39.669218  388420 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:36:39.669246  388420 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 10:36:39.683183  388420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:36:40.965416  387591 addons.go:515] duration metric: took 4.746465999s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1115 10:36:40.965454  387591 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:36:40.965477  387591 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:36:41.460167  387591 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1115 10:36:41.465475  387591 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1115 10:36:41.466642  387591 api_server.go:141] control plane version: v1.34.1
	I1115 10:36:41.466668  387591 api_server.go:131] duration metric: took 507.289044ms to wait for apiserver health ...
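
	The health wait below first saw a 500 while the rbac/bootstrap-roles post-start hook finished, then a clean 200; a manual equivalent (anonymous access to /healthz assumed, endpoint from the log):

	    # poll until the apiserver reports "ok"
	    until curl -ksf https://192.168.103.2:8443/healthz | grep -qx ok; do
	      sleep 1
	    done
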
	I1115 10:36:41.466679  387591 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:36:41.470116  387591 system_pods.go:59] 8 kube-system pods found
	I1115 10:36:41.470165  387591 system_pods.go:61] "coredns-66bc5c9577-rblh2" [903029e0-3b15-43f3-836a-884de528cbc9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 10:36:41.470180  387591 system_pods.go:61] "etcd-newest-cni-086099" [6768a007-08a6-47b0-9917-cf54f577829b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:36:41.470190  387591 system_pods.go:61] "kindnet-2h7mm" [1b25f4e6-5f26-42ce-8ceb-56003682c785] Running
	I1115 10:36:41.470200  387591 system_pods.go:61] "kube-apiserver-newest-cni-086099" [3ca22829-f679-44bf-94e5-e4a368e13dcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:36:41.470210  387591 system_pods.go:61] "kube-controller-manager-newest-cni-086099" [1f45f32a-2d9e-49c0-9c69-d2aa59324564] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:36:41.470219  387591 system_pods.go:61] "kube-proxy-6jpzt" [7409c19f-472b-4074-81d0-8e43ac2bc9d4] Running
	I1115 10:36:41.470226  387591 system_pods.go:61] "kube-scheduler-newest-cni-086099" [c3510e0f-9b51-4fb5-bc6e-d0e47be8f5ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:36:41.470235  387591 system_pods.go:61] "storage-provisioner" [23166a3f-bb02-48ca-ab00-721c8c46525d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 10:36:41.470247  387591 system_pods.go:74] duration metric: took 3.560608ms to wait for pod list to return data ...
	I1115 10:36:41.470262  387591 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:36:41.472726  387591 default_sa.go:45] found service account: "default"
	I1115 10:36:41.472751  387591 default_sa.go:55] duration metric: took 2.478273ms for default service account to be created ...
	I1115 10:36:41.472765  387591 kubeadm.go:587] duration metric: took 5.253867745s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1115 10:36:41.472786  387591 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:36:41.475250  387591 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:36:41.475273  387591 node_conditions.go:123] node cpu capacity is 8
	I1115 10:36:41.475284  387591 node_conditions.go:105] duration metric: took 2.490696ms to run NodePressure ...
	I1115 10:36:41.475297  387591 start.go:242] waiting for startup goroutines ...
	I1115 10:36:41.475306  387591 start.go:247] waiting for cluster config update ...
	I1115 10:36:41.475322  387591 start.go:256] writing updated cluster config ...
	I1115 10:36:41.475622  387591 ssh_runner.go:195] Run: rm -f paused
	I1115 10:36:41.529383  387591 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:36:41.531753  387591 out.go:179] * Done! kubectl is now configured to use "newest-cni-086099" cluster and "default" namespace by default
	I1115 10:36:42.149798  388420 node_ready.go:49] node "default-k8s-diff-port-026691" is "Ready"
	I1115 10:36:42.149832  388420 node_ready.go:38] duration metric: took 2.87526393s for node "default-k8s-diff-port-026691" to be "Ready" ...
	I1115 10:36:42.149851  388420 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:36:42.149915  388420 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:36:43.654191  388420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.38943226s)
	I1115 10:36:43.654229  388420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.29621492s)
	I1115 10:36:43.654402  388420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.971169317s)
	I1115 10:36:43.654437  388420 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.50449925s)
	I1115 10:36:43.654474  388420 api_server.go:72] duration metric: took 4.78163246s to wait for apiserver process to appear ...
	I1115 10:36:43.654482  388420 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:36:43.654504  388420 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1115 10:36:43.655988  388420 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-026691 addons enable metrics-server
	
	I1115 10:36:43.659469  388420 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:36:43.659501  388420 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:36:43.660788  388420 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1115 10:36:43.661827  388420 addons.go:515] duration metric: took 4.788813528s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1115 10:36:44.155099  388420 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1115 10:36:44.160271  388420 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1115 10:36:44.161286  388420 api_server.go:141] control plane version: v1.34.1
	I1115 10:36:44.161316  388420 api_server.go:131] duration metric: took 506.825578ms to wait for apiserver health ...
	I1115 10:36:44.161327  388420 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:36:44.164559  388420 system_pods.go:59] 8 kube-system pods found
	I1115 10:36:44.164606  388420 system_pods.go:61] "coredns-66bc5c9577-5q2j4" [e6c4ca54-e0fe-45ee-88a6-33bdccbb876c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:44.164622  388420 system_pods.go:61] "etcd-default-k8s-diff-port-026691" [f8228efa-91a3-41fb-9952-3b4063ddf162] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:36:44.164631  388420 system_pods.go:61] "kindnet-hjdrk" [9e1f7579-f5a2-44cd-b77f-71219cd8827d] Running
	I1115 10:36:44.164645  388420 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-026691" [9da86b3b-bfcd-4c40-ac86-ee9d9faeb9b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:36:44.164658  388420 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-026691" [23d9be87-1de1-4c38-b65e-b084cf0fed25] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:36:44.164667  388420 system_pods.go:61] "kube-proxy-c5bw5" [ee48d34b-ae60-4a03-a7bd-df76e089eebb] Running
	I1115 10:36:44.164677  388420 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-026691" [52b3711f-015c-4af6-8672-58642cbc0c16] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:36:44.164686  388420 system_pods.go:61] "storage-provisioner" [7dedf7a9-415d-4260-b225-7ca171744768] Running
	I1115 10:36:44.164696  388420 system_pods.go:74] duration metric: took 3.356326ms to wait for pod list to return data ...
	I1115 10:36:44.164709  388420 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:36:44.166570  388420 default_sa.go:45] found service account: "default"
	I1115 10:36:44.166593  388420 default_sa.go:55] duration metric: took 1.872347ms for default service account to be created ...
	I1115 10:36:44.166603  388420 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:36:44.169425  388420 system_pods.go:86] 8 kube-system pods found
	I1115 10:36:44.169450  388420 system_pods.go:89] "coredns-66bc5c9577-5q2j4" [e6c4ca54-e0fe-45ee-88a6-33bdccbb876c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:44.169459  388420 system_pods.go:89] "etcd-default-k8s-diff-port-026691" [f8228efa-91a3-41fb-9952-3b4063ddf162] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:36:44.169467  388420 system_pods.go:89] "kindnet-hjdrk" [9e1f7579-f5a2-44cd-b77f-71219cd8827d] Running
	I1115 10:36:44.169472  388420 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-026691" [9da86b3b-bfcd-4c40-ac86-ee9d9faeb9b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:36:44.169482  388420 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-026691" [23d9be87-1de1-4c38-b65e-b084cf0fed25] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:36:44.169497  388420 system_pods.go:89] "kube-proxy-c5bw5" [ee48d34b-ae60-4a03-a7bd-df76e089eebb] Running
	I1115 10:36:44.169512  388420 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-026691" [52b3711f-015c-4af6-8672-58642cbc0c16] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:36:44.169521  388420 system_pods.go:89] "storage-provisioner" [7dedf7a9-415d-4260-b225-7ca171744768] Running
	I1115 10:36:44.169532  388420 system_pods.go:126] duration metric: took 2.922555ms to wait for k8s-apps to be running ...
	I1115 10:36:44.169541  388420 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:36:44.169593  388420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:36:44.183310  388420 system_svc.go:56] duration metric: took 13.759187ms WaitForService to wait for kubelet
	I1115 10:36:44.183342  388420 kubeadm.go:587] duration metric: took 5.310501278s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:36:44.183366  388420 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:36:44.186800  388420 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:36:44.186826  388420 node_conditions.go:123] node cpu capacity is 8
	I1115 10:36:44.186843  388420 node_conditions.go:105] duration metric: took 3.463462ms to run NodePressure ...
	I1115 10:36:44.186859  388420 start.go:242] waiting for startup goroutines ...
	I1115 10:36:44.186872  388420 start.go:247] waiting for cluster config update ...
	I1115 10:36:44.186896  388420 start.go:256] writing updated cluster config ...
	I1115 10:36:44.187247  388420 ssh_runner.go:195] Run: rm -f paused
	I1115 10:36:44.191349  388420 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:36:44.194864  388420 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5q2j4" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 10:36:46.200419  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:36:48.202278  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:36:50.700646  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:36:53.200685  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:36:55.201458  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:36:57.700358  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:36:59.700839  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:37:02.202553  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:37:04.700511  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:37:07.200845  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:37:09.701174  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:37:12.201848  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:37:14.700490  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:37:17.200204  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:37:19.200721  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	W1115 10:37:21.700622  388420 pod_ready.go:104] pod "coredns-66bc5c9577-5q2j4" is not "Ready", error: <nil>
	I1115 10:37:22.700922  388420 pod_ready.go:94] pod "coredns-66bc5c9577-5q2j4" is "Ready"
	I1115 10:37:22.700972  388420 pod_ready.go:86] duration metric: took 38.506067751s for pod "coredns-66bc5c9577-5q2j4" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:22.703455  388420 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:22.707198  388420 pod_ready.go:94] pod "etcd-default-k8s-diff-port-026691" is "Ready"
	I1115 10:37:22.707224  388420 pod_ready.go:86] duration metric: took 3.746841ms for pod "etcd-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:22.709149  388420 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:22.712859  388420 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-026691" is "Ready"
	I1115 10:37:22.712878  388420 pod_ready.go:86] duration metric: took 3.701511ms for pod "kube-apiserver-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:22.714646  388420 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:22.898389  388420 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-026691" is "Ready"
	I1115 10:37:22.898421  388420 pod_ready.go:86] duration metric: took 183.755678ms for pod "kube-controller-manager-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:23.100053  388420 pod_ready.go:83] waiting for pod "kube-proxy-c5bw5" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:23.498461  388420 pod_ready.go:94] pod "kube-proxy-c5bw5" is "Ready"
	I1115 10:37:23.498490  388420 pod_ready.go:86] duration metric: took 398.410887ms for pod "kube-proxy-c5bw5" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:23.700082  388420 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:24.099377  388420 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-026691" is "Ready"
	I1115 10:37:24.099410  388420 pod_ready.go:86] duration metric: took 399.303233ms for pod "kube-scheduler-default-k8s-diff-port-026691" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:24.099423  388420 pod_ready.go:40] duration metric: took 39.908043344s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:37:24.145421  388420 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:37:24.147183  388420 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-026691" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 15 10:37:14 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:14.004609762Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:37:14 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:14.004748327Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/bc56cf4ad2338104279fae2ffa4cc6dfcf8114153d42cbd26bac9283ab91bddb/merged/etc/passwd: no such file or directory"
	Nov 15 10:37:14 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:14.004773277Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/bc56cf4ad2338104279fae2ffa4cc6dfcf8114153d42cbd26bac9283ab91bddb/merged/etc/group: no such file or directory"
	Nov 15 10:37:14 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:14.004997265Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:37:14 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:14.035013492Z" level=info msg="Created container c80de6cf1abc0ce1c57707adc801c943013ebf29b303b1b301153401cfb9e7f4: kube-system/storage-provisioner/storage-provisioner" id=88908268-a616-4324-867d-621aa395fb1b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:37:14 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:14.035707379Z" level=info msg="Starting container: c80de6cf1abc0ce1c57707adc801c943013ebf29b303b1b301153401cfb9e7f4" id=c3ce7c4f-4af8-46c3-9c60-27a6d2751901 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:37:14 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:14.037687015Z" level=info msg="Started container" PID=1822 containerID=c80de6cf1abc0ce1c57707adc801c943013ebf29b303b1b301153401cfb9e7f4 description=kube-system/storage-provisioner/storage-provisioner id=c3ce7c4f-4af8-46c3-9c60-27a6d2751901 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0879512bba06072ef6eb046a730037f955ebd87bd96974c51067724cc996fa4f
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.715599535Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.720028064Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.720052566Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.720070105Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.723655378Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.723686721Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.723715483Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.727330556Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.727352428Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.727368911Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.730949302Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.730987347Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.731010013Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.73444403Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.734470655Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.734489535Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.738108871Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:37:23 default-k8s-diff-port-026691 crio[681]: time="2025-11-15T10:37:23.738128433Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	c80de6cf1abc0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago       Running             storage-provisioner         2                   0879512bba060       storage-provisioner                                    kube-system
	8168cf11cc97a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           32 seconds ago       Exited              dashboard-metrics-scraper   2                   1541e4c08bf7c       dashboard-metrics-scraper-6ffb444bf9-rtx7l             kubernetes-dashboard
	b7dca0b8853e8       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago       Running             kubernetes-dashboard        0                   21717aa5a6d69       kubernetes-dashboard-855c9754f9-lnfbf                  kubernetes-dashboard
	c835d06b811af       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           56 seconds ago       Running             kube-proxy                  1                   e19687bbe255d       kube-proxy-c5bw5                                       kube-system
	13394bc9d7c66       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago       Exited              storage-provisioner         1                   0879512bba060       storage-provisioner                                    kube-system
	7066ef5abc4bc       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           56 seconds ago       Running             coredns                     1                   72f03cbe17792       coredns-66bc5c9577-5q2j4                               kube-system
	3ae4905c605ef       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago       Running             busybox                     1                   3106611b956e2       busybox                                                default
	de29b76605c3a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago       Running             kindnet-cni                 1                   38a2086414ec1       kindnet-hjdrk                                          kube-system
	b04411b3a0233       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           About a minute ago   Running             kube-scheduler              1                   69601d853e140       kube-scheduler-default-k8s-diff-port-026691            kube-system
	971b8e4c2073b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           About a minute ago   Running             kube-apiserver              1                   8542ee2316931       kube-apiserver-default-k8s-diff-port-026691            kube-system
	6a1db649ea51d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           About a minute ago   Running             kube-controller-manager     1                   e716f86b369cd       kube-controller-manager-default-k8s-diff-port-026691   kube-system
	58595dd2cf4ce       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        1                   0abfd59608c32       etcd-default-k8s-diff-port-026691                      kube-system
	
	
	==> coredns [7066ef5abc4bc0c6c62f762a419ff0ace9bbf240ada62fbf94eea91e68213566] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57953 - 19015 "HINFO IN 7049713276823466735.5464609015533187721. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.016127809s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-026691
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-026691
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=default-k8s-diff-port-026691
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_35_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:35:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-026691
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:37:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:37:23 +0000   Sat, 15 Nov 2025 10:35:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:37:23 +0000   Sat, 15 Nov 2025 10:35:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:37:23 +0000   Sat, 15 Nov 2025 10:35:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:37:23 +0000   Sat, 15 Nov 2025 10:36:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-026691
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                cb07002a-423d-4a10-9a8e-bf05fe259209
	  Boot ID:                    6a33f504-3a8c-4d49-8fc5-301cf5275efc
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-5q2j4                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m20s
	  kube-system                 etcd-default-k8s-diff-port-026691                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m25s
	  kube-system                 kindnet-hjdrk                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m20s
	  kube-system                 kube-apiserver-default-k8s-diff-port-026691             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-026691    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-proxy-c5bw5                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-scheduler-default-k8s-diff-port-026691             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-rtx7l              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-lnfbf                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m19s                  kube-proxy       
	  Normal   Starting                 56s                    kube-proxy       
	  Normal   Starting                 2m32s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m32s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m31s (x8 over 2m32s)  kubelet          Node default-k8s-diff-port-026691 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m31s (x8 over 2m32s)  kubelet          Node default-k8s-diff-port-026691 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m31s (x8 over 2m32s)  kubelet          Node default-k8s-diff-port-026691 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m25s                  kubelet          Node default-k8s-diff-port-026691 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m25s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m25s                  kubelet          Node default-k8s-diff-port-026691 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m25s                  kubelet          Node default-k8s-diff-port-026691 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m25s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m21s                  node-controller  Node default-k8s-diff-port-026691 event: Registered Node default-k8s-diff-port-026691 in Controller
	  Normal   NodeReady                99s                    kubelet          Node default-k8s-diff-port-026691 status is now: NodeReady
	  Normal   Starting                 63s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 63s)      kubelet          Node default-k8s-diff-port-026691 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 63s)      kubelet          Node default-k8s-diff-port-026691 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 63s)      kubelet          Node default-k8s-diff-port-026691 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                    node-controller  Node default-k8s-diff-port-026691 event: Registered Node default-k8s-diff-port-026691 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 7f 53 f9 15 93 08 06
	[  +0.017821] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c6 66 25 e9 45 28 08 06
	[ +19.178294] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 a3 65 b9 28 15 08 06
	[  +0.347929] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 6d df 33 a5 ad 08 06
	[  +0.000319] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 7f 53 f9 15 93 08 06
	[Nov15 10:33] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 96 ed 10 e8 9a 80 08 06
	[  +0.000481] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 a3 65 b9 28 15 08 06
	[ +35.214217] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 c2 9a 56 52 0c 08 06
	[  +9.104720] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 c2 c4 d1 7c dd 08 06
	[Nov15 10:34] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 68 1e 80 53 05 08 06
	[  +0.000444] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 c2 c4 d1 7c dd 08 06
	[ +18.836046] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 98 db 78 4f 0b 08 06
	[  +0.000708] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 c2 9a 56 52 0c 08 06
	
	
	==> etcd [58595dd2cf4ce1cb8f740a6594f3ee7a3c7d2587682b4e6c526266c0067303a7] <==
	{"level":"warn","ts":"2025-11-15T10:36:40.682609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.747329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.757934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.766589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.773794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.781907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.791044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.845440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.855152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.864761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.871212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.879161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.885852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.945550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.954602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.963938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.973480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.981458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.987505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:40.993865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:41.048852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:41.066044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:41.074303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:41.080618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:41.186388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38196","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:37:40 up  2:19,  0 user,  load average: 1.77, 3.62, 2.67
	Linux default-k8s-diff-port-026691 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [de29b76605c3aef2cd62d1da1ab7845a60a8a7dbe6ba39ecfdbf9ae60a3a31d8] <==
	I1115 10:36:43.444727       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:36:43.445023       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 10:36:43.445247       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:36:43.445266       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:36:43.445292       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:36:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:36:43.714933       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:36:43.741851       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:36:43.742181       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 10:36:43.742212       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	E1115 10:37:13.716427       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1115 10:37:13.716492       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 10:37:13.742649       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 10:37:13.743767       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1115 10:37:15.043284       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:37:15.043323       1 metrics.go:72] Registering metrics
	I1115 10:37:15.043430       1 controller.go:711] "Syncing nftables rules"
	I1115 10:37:23.715281       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:37:23.715361       1 main.go:301] handling current node
	I1115 10:37:33.715509       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:37:33.715560       1 main.go:301] handling current node
	
	
	==> kube-apiserver [971b8e4c2073b59c0dd66594f19cfc1d0b9f81cb1bad24b21946d5ea7b012768] <==
	I1115 10:36:42.246198       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 10:36:42.246205       1 cache.go:39] Caches are synced for autoregister controller
	I1115 10:36:42.246345       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 10:36:42.246382       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 10:36:42.246471       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1115 10:36:42.246492       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1115 10:36:42.246641       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 10:36:42.246719       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 10:36:42.246727       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 10:36:42.246728       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 10:36:42.254385       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 10:36:42.254517       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1115 10:36:42.259556       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 10:36:42.748090       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:36:42.895642       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 10:36:42.973937       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:36:43.045724       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:36:43.057747       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:36:43.071656       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:36:43.269659       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.34.16"}
	I1115 10:36:43.349124       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.149.205"}
	I1115 10:36:45.705040       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:36:46.004858       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:36:46.004858       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:36:46.054439       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [6a1db649ea51d34c5661db65312d1c8660828376983f865ce0b3d2801b219c2d] <==
	I1115 10:36:45.451278       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 10:36:45.451472       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 10:36:45.451291       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 10:36:45.451542       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-026691"
	I1115 10:36:45.451333       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 10:36:45.451609       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1115 10:36:45.451343       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 10:36:45.451316       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1115 10:36:45.452992       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 10:36:45.453644       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 10:36:45.455153       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 10:36:45.455425       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:36:45.455488       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 10:36:45.455491       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 10:36:45.455633       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 10:36:45.455504       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 10:36:45.457014       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 10:36:45.457882       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 10:36:45.462117       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 10:36:45.466020       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1115 10:36:45.467216       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1115 10:36:45.470651       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:36:45.503580       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:36:45.503601       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:36:45.503610       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [c835d06b811afe7277524798594204743e6b5c98eb025ff53b5a2bbdf7a96794] <==
	I1115 10:36:43.585537       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:36:43.711194       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:36:43.811824       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:36:43.811887       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1115 10:36:43.812030       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:36:43.830921       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:36:43.831005       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:36:43.836768       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:36:43.837251       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:36:43.837288       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:36:43.839185       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:36:43.839209       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:36:43.839274       1 config.go:200] "Starting service config controller"
	I1115 10:36:43.839644       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:36:43.839671       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:36:43.839685       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:36:43.839814       1 config.go:309] "Starting node config controller"
	I1115 10:36:43.839847       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:36:43.839855       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:36:43.939411       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:36:43.940138       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 10:36:43.940149       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [b04411b3a023360282fda35601339f2000829b38e465e5c4c130b7c58e111bb3] <==
	I1115 10:36:40.077537       1 serving.go:386] Generated self-signed cert in-memory
	W1115 10:36:42.146529       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1115 10:36:42.146566       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1115 10:36:42.146579       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1115 10:36:42.146589       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1115 10:36:42.247541       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 10:36:42.247777       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:36:42.252142       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:36:42.252199       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:36:42.252850       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 10:36:42.252920       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:36:42.353299       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:36:46 default-k8s-diff-port-026691 kubelet[840]: I1115 10:36:46.068659     840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/230beb1a-4842-4cb2-b64f-07d59686ef2c-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-lnfbf\" (UID: \"230beb1a-4842-4cb2-b64f-07d59686ef2c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lnfbf"
	Nov 15 10:36:46 default-k8s-diff-port-026691 kubelet[840]: W1115 10:36:46.242882     840 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/acb25a518a850a99b04a9ecc3ee276eeb45082aa05801ad29c50dc8761212798/crio-1541e4c08bf7c68bff0c2d52ab6dccdfab97bf5475884f904c0f2ee819a77479 WatchSource:0}: Error finding container 1541e4c08bf7c68bff0c2d52ab6dccdfab97bf5475884f904c0f2ee819a77479: Status 404 returned error can't find the container with id 1541e4c08bf7c68bff0c2d52ab6dccdfab97bf5475884f904c0f2ee819a77479
	Nov 15 10:36:46 default-k8s-diff-port-026691 kubelet[840]: W1115 10:36:46.252112     840 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/acb25a518a850a99b04a9ecc3ee276eeb45082aa05801ad29c50dc8761212798/crio-21717aa5a6d697df70c9e71b74b7d7b24dc009e4f909506454bb0837c2d99722 WatchSource:0}: Error finding container 21717aa5a6d697df70c9e71b74b7d7b24dc009e4f909506454bb0837c2d99722: Status 404 returned error can't find the container with id 21717aa5a6d697df70c9e71b74b7d7b24dc009e4f909506454bb0837c2d99722
	Nov 15 10:36:49 default-k8s-diff-port-026691 kubelet[840]: I1115 10:36:49.884137     840 scope.go:117] "RemoveContainer" containerID="11cf4e27ed86db07329d5e4d3a9ba83f4cb58b48af47eb26166bbf4b7788089e"
	Nov 15 10:36:50 default-k8s-diff-port-026691 kubelet[840]: I1115 10:36:50.889226     840 scope.go:117] "RemoveContainer" containerID="11cf4e27ed86db07329d5e4d3a9ba83f4cb58b48af47eb26166bbf4b7788089e"
	Nov 15 10:36:50 default-k8s-diff-port-026691 kubelet[840]: I1115 10:36:50.889401     840 scope.go:117] "RemoveContainer" containerID="d137bb0de6482bc21bf26ec2b5a8cf6fcbae6e8f37c1ee370da009a0dcbdb52d"
	Nov 15 10:36:50 default-k8s-diff-port-026691 kubelet[840]: E1115 10:36:50.889590     840 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rtx7l_kubernetes-dashboard(16880d02-ce40-4d30-8524-6f66aa5404f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rtx7l" podUID="16880d02-ce40-4d30-8524-6f66aa5404f5"
	Nov 15 10:36:51 default-k8s-diff-port-026691 kubelet[840]: I1115 10:36:51.894021     840 scope.go:117] "RemoveContainer" containerID="d137bb0de6482bc21bf26ec2b5a8cf6fcbae6e8f37c1ee370da009a0dcbdb52d"
	Nov 15 10:36:51 default-k8s-diff-port-026691 kubelet[840]: E1115 10:36:51.894427     840 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rtx7l_kubernetes-dashboard(16880d02-ce40-4d30-8524-6f66aa5404f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rtx7l" podUID="16880d02-ce40-4d30-8524-6f66aa5404f5"
	Nov 15 10:36:52 default-k8s-diff-port-026691 kubelet[840]: I1115 10:36:52.194230     840 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 15 10:36:53 default-k8s-diff-port-026691 kubelet[840]: I1115 10:36:53.363470     840 scope.go:117] "RemoveContainer" containerID="d137bb0de6482bc21bf26ec2b5a8cf6fcbae6e8f37c1ee370da009a0dcbdb52d"
	Nov 15 10:36:53 default-k8s-diff-port-026691 kubelet[840]: E1115 10:36:53.363704     840 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rtx7l_kubernetes-dashboard(16880d02-ce40-4d30-8524-6f66aa5404f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rtx7l" podUID="16880d02-ce40-4d30-8524-6f66aa5404f5"
	Nov 15 10:36:55 default-k8s-diff-port-026691 kubelet[840]: I1115 10:36:55.960798     840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lnfbf" podStartSLOduration=1.908473374 podStartE2EDuration="10.960772714s" podCreationTimestamp="2025-11-15 10:36:45 +0000 UTC" firstStartedPulling="2025-11-15 10:36:46.254673472 +0000 UTC m=+8.642502283" lastFinishedPulling="2025-11-15 10:36:55.306972823 +0000 UTC m=+17.694801623" observedRunningTime="2025-11-15 10:36:55.960417775 +0000 UTC m=+18.348246594" watchObservedRunningTime="2025-11-15 10:36:55.960772714 +0000 UTC m=+18.348601533"
	Nov 15 10:37:07 default-k8s-diff-port-026691 kubelet[840]: I1115 10:37:07.757450     840 scope.go:117] "RemoveContainer" containerID="d137bb0de6482bc21bf26ec2b5a8cf6fcbae6e8f37c1ee370da009a0dcbdb52d"
	Nov 15 10:37:07 default-k8s-diff-port-026691 kubelet[840]: I1115 10:37:07.980434     840 scope.go:117] "RemoveContainer" containerID="d137bb0de6482bc21bf26ec2b5a8cf6fcbae6e8f37c1ee370da009a0dcbdb52d"
	Nov 15 10:37:07 default-k8s-diff-port-026691 kubelet[840]: I1115 10:37:07.980665     840 scope.go:117] "RemoveContainer" containerID="8168cf11cc97ab90349686f8d781936b53a8680baa790ca9124eb4fee99df98d"
	Nov 15 10:37:07 default-k8s-diff-port-026691 kubelet[840]: E1115 10:37:07.980886     840 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rtx7l_kubernetes-dashboard(16880d02-ce40-4d30-8524-6f66aa5404f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rtx7l" podUID="16880d02-ce40-4d30-8524-6f66aa5404f5"
	Nov 15 10:37:13 default-k8s-diff-port-026691 kubelet[840]: I1115 10:37:13.363598     840 scope.go:117] "RemoveContainer" containerID="8168cf11cc97ab90349686f8d781936b53a8680baa790ca9124eb4fee99df98d"
	Nov 15 10:37:13 default-k8s-diff-port-026691 kubelet[840]: E1115 10:37:13.363788     840 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rtx7l_kubernetes-dashboard(16880d02-ce40-4d30-8524-6f66aa5404f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rtx7l" podUID="16880d02-ce40-4d30-8524-6f66aa5404f5"
	Nov 15 10:37:13 default-k8s-diff-port-026691 kubelet[840]: I1115 10:37:13.997322     840 scope.go:117] "RemoveContainer" containerID="13394bc9d7c6680728c8c7f5b7c939c8bf8ddf701e93585d0d249b4debb8779d"
	Nov 15 10:37:27 default-k8s-diff-port-026691 kubelet[840]: I1115 10:37:27.757987     840 scope.go:117] "RemoveContainer" containerID="8168cf11cc97ab90349686f8d781936b53a8680baa790ca9124eb4fee99df98d"
	Nov 15 10:37:27 default-k8s-diff-port-026691 kubelet[840]: E1115 10:37:27.758244     840 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rtx7l_kubernetes-dashboard(16880d02-ce40-4d30-8524-6f66aa5404f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rtx7l" podUID="16880d02-ce40-4d30-8524-6f66aa5404f5"
	Nov 15 10:37:36 default-k8s-diff-port-026691 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:37:36 default-k8s-diff-port-026691 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:37:36 default-k8s-diff-port-026691 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [b7dca0b8853e890d3a900bd3933f60ce1727d329f35d122cf14fd332ab681fb0] <==
	2025/11/15 10:36:55 Starting overwatch
	2025/11/15 10:36:55 Using namespace: kubernetes-dashboard
	2025/11/15 10:36:55 Using in-cluster config to connect to apiserver
	2025/11/15 10:36:55 Using secret token for csrf signing
	2025/11/15 10:36:55 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 10:36:55 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 10:36:55 Successful initial request to the apiserver, version: v1.34.1
	2025/11/15 10:36:55 Generating JWE encryption key
	2025/11/15 10:36:55 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 10:36:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 10:36:55 Initializing JWE encryption key from synchronized object
	2025/11/15 10:36:55 Creating in-cluster Sidecar client
	2025/11/15 10:36:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:36:55 Serving insecurely on HTTP port: 9090
	2025/11/15 10:37:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [13394bc9d7c6680728c8c7f5b7c939c8bf8ddf701e93585d0d249b4debb8779d] <==
	I1115 10:36:43.552239       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 10:37:13.556514       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c80de6cf1abc0ce1c57707adc801c943013ebf29b303b1b301153401cfb9e7f4] <==
	I1115 10:37:14.050452       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 10:37:14.058749       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:37:14.058792       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 10:37:14.060926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:17.515637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:21.776464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:25.374895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:28.428826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:31.451139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:31.455380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:37:31.455523       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:37:31.455609       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e5b7cf19-8a06-483d-895a-a97445d789b0", APIVersion:"v1", ResourceVersion:"687", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-026691_09f8484c-9adc-4192-b44b-479815d28210 became leader
	I1115 10:37:31.455657       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-026691_09f8484c-9adc-4192-b44b-479815d28210!
	W1115 10:37:31.457391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:31.460850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:37:31.555909       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-026691_09f8484c-9adc-4192-b44b-479815d28210!
	W1115 10:37:33.463564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:33.467924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:35.470761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:35.474935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:37.478529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:37.483343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:39.486240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:39.490910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-026691 -n default-k8s-diff-port-026691
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-026691 -n default-k8s-diff-port-026691: exit status 2 (327.419656ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-026691 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.16s)

                                                
                                    

Test pass (264/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 28.62
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 15.22
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
20 TestDownloadOnlyKic 0.42
21 TestBinaryMirror 0.83
22 TestOffline 88.7
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 167.45
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 12.47
48 TestAddons/StoppedEnableDisable 12.41
49 TestCertOptions 28.19
50 TestCertExpiration 231.58
52 TestForceSystemdFlag 32.51
53 TestForceSystemdEnv 35.46
58 TestErrorSpam/setup 20.74
59 TestErrorSpam/start 0.66
60 TestErrorSpam/status 0.95
61 TestErrorSpam/pause 6.18
62 TestErrorSpam/unpause 5.68
63 TestErrorSpam/stop 1.52
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 73.46
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 30.28
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3
75 TestFunctional/serial/CacheCmd/cache/add_local 2.37
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.63
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 27.36
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.21
86 TestFunctional/serial/LogsFileCmd 1.23
87 TestFunctional/serial/InvalidService 3.98
89 TestFunctional/parallel/ConfigCmd 0.47
90 TestFunctional/parallel/DashboardCmd 11.29
91 TestFunctional/parallel/DryRun 0.38
92 TestFunctional/parallel/InternationalLanguage 0.17
93 TestFunctional/parallel/StatusCmd 0.98
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 37.14
101 TestFunctional/parallel/SSHCmd 0.62
102 TestFunctional/parallel/CpCmd 1.86
103 TestFunctional/parallel/MySQL 21.23
104 TestFunctional/parallel/FileSync 0.4
105 TestFunctional/parallel/CertSync 2.01
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.56
113 TestFunctional/parallel/License 0.81
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.56
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 19.25
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
127 TestFunctional/parallel/ProfileCmd/profile_list 0.4
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
129 TestFunctional/parallel/MountCmd/any-port 8.6
130 TestFunctional/parallel/MountCmd/specific-port 1.58
131 TestFunctional/parallel/MountCmd/VerifyCleanup 1.74
132 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
133 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
134 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
135 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
136 TestFunctional/parallel/ImageCommands/ImageBuild 6.99
137 TestFunctional/parallel/ImageCommands/Setup 2.76
142 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
145 TestFunctional/parallel/Version/short 0.07
146 TestFunctional/parallel/Version/components 0.51
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
150 TestFunctional/parallel/ServiceCmd/List 1.71
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.7
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 169.26
163 TestMultiControlPlane/serial/DeployApp 7.7
164 TestMultiControlPlane/serial/PingHostFromPods 1.1
165 TestMultiControlPlane/serial/AddWorkerNode 27.41
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.9
168 TestMultiControlPlane/serial/CopyFile 17.01
169 TestMultiControlPlane/serial/StopSecondaryNode 12.75
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
171 TestMultiControlPlane/serial/RestartSecondaryNode 28.43
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.26
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 123.96
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.76
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.73
176 TestMultiControlPlane/serial/StopCluster 36.81
177 TestMultiControlPlane/serial/RestartCluster 116.62
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.69
179 TestMultiControlPlane/serial/AddSecondaryNode 43.15
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.89
185 TestJSONOutput/start/Command 72.33
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.85
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.23
210 TestKicCustomNetwork/create_custom_network 39.33
211 TestKicCustomNetwork/use_default_bridge_network 24.82
212 TestKicExistingNetwork 25.78
213 TestKicCustomSubnet 26.88
214 TestKicStaticIP 28.49
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 52.58
219 TestMountStart/serial/StartWithMountFirst 7.61
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 7.8
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.71
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.29
226 TestMountStart/serial/RestartStopped 8.01
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 130.33
231 TestMultiNode/serial/DeployApp2Nodes 7.26
232 TestMultiNode/serial/PingHostFrom2Pods 0.79
233 TestMultiNode/serial/AddNode 24.52
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.66
236 TestMultiNode/serial/CopyFile 9.67
237 TestMultiNode/serial/StopNode 2.29
238 TestMultiNode/serial/StartAfterStop 7.01
239 TestMultiNode/serial/RestartKeepsNodes 72.91
240 TestMultiNode/serial/DeleteNode 5.26
241 TestMultiNode/serial/StopMultiNode 24.03
242 TestMultiNode/serial/RestartMultiNode 47.09
243 TestMultiNode/serial/ValidateNameConflict 28.72
248 TestPreload 127.73
250 TestScheduledStopUnix 99.7
253 TestInsufficientStorage 10.2
254 TestRunningBinaryUpgrade 44.93
256 TestKubernetesUpgrade 335.22
257 TestMissingContainerUpgrade 125.69
259 TestPause/serial/Start 80.06
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
262 TestNoKubernetes/serial/StartWithK8s 38.34
266 TestNoKubernetes/serial/StartWithStopK8s 5.55
271 TestNetworkPlugins/group/false 4.04
275 TestNoKubernetes/serial/Start 7.52
276 TestStoppedBinaryUpgrade/Setup 3.83
277 TestStoppedBinaryUpgrade/Upgrade 113.16
278 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
279 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
280 TestNoKubernetes/serial/ProfileList 1.42
281 TestNoKubernetes/serial/Stop 1.3
282 TestNoKubernetes/serial/StartNoArgs 8.87
283 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
284 TestPause/serial/SecondStartNoReconfiguration 27.36
286 TestStoppedBinaryUpgrade/MinikubeLogs 0.93
294 TestNetworkPlugins/group/auto/Start 74.96
295 TestNetworkPlugins/group/kindnet/Start 71.82
296 TestNetworkPlugins/group/auto/KubeletFlags 0.29
297 TestNetworkPlugins/group/auto/NetCatPod 9.21
298 TestNetworkPlugins/group/auto/DNS 0.12
299 TestNetworkPlugins/group/auto/Localhost 0.09
300 TestNetworkPlugins/group/auto/HairPin 0.09
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
303 TestNetworkPlugins/group/kindnet/NetCatPod 10.23
304 TestNetworkPlugins/group/calico/Start 53.44
305 TestNetworkPlugins/group/kindnet/DNS 0.13
306 TestNetworkPlugins/group/kindnet/Localhost 0.12
307 TestNetworkPlugins/group/kindnet/HairPin 0.12
308 TestNetworkPlugins/group/custom-flannel/Start 60.29
309 TestNetworkPlugins/group/enable-default-cni/Start 39.23
310 TestNetworkPlugins/group/calico/ControllerPod 6.01
311 TestNetworkPlugins/group/calico/KubeletFlags 0.3
312 TestNetworkPlugins/group/calico/NetCatPod 10.2
313 TestNetworkPlugins/group/calico/DNS 0.13
314 TestNetworkPlugins/group/calico/Localhost 0.13
315 TestNetworkPlugins/group/calico/HairPin 0.11
316 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
317 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.2
318 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
319 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
320 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
321 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
322 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.24
323 TestNetworkPlugins/group/flannel/Start 54.05
324 TestNetworkPlugins/group/custom-flannel/DNS 0.15
325 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
326 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
327 TestNetworkPlugins/group/bridge/Start 67.33
329 TestStartStop/group/old-k8s-version/serial/FirstStart 57.9
331 TestStartStop/group/no-preload/serial/FirstStart 58.14
332 TestNetworkPlugins/group/flannel/ControllerPod 6.01
333 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
334 TestNetworkPlugins/group/flannel/NetCatPod 9.2
335 TestNetworkPlugins/group/flannel/DNS 0.12
336 TestNetworkPlugins/group/flannel/Localhost 0.11
337 TestNetworkPlugins/group/flannel/HairPin 0.11
338 TestStartStop/group/old-k8s-version/serial/DeployApp 11.26
339 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
340 TestNetworkPlugins/group/bridge/NetCatPod 10.28
342 TestStartStop/group/old-k8s-version/serial/Stop 12.21
343 TestNetworkPlugins/group/bridge/DNS 0.15
344 TestStartStop/group/no-preload/serial/DeployApp 10.27
345 TestNetworkPlugins/group/bridge/Localhost 0.12
346 TestNetworkPlugins/group/bridge/HairPin 0.1
348 TestStartStop/group/embed-certs/serial/FirstStart 49.84
350 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
351 TestStartStop/group/old-k8s-version/serial/SecondStart 49.86
352 TestStartStop/group/no-preload/serial/Stop 12.77
354 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 72.01
355 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.25
356 TestStartStop/group/no-preload/serial/SecondStart 55.92
357 TestStartStop/group/embed-certs/serial/DeployApp 11.22
358 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
360 TestStartStop/group/embed-certs/serial/Stop 12.19
361 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
362 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
364 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
365 TestStartStop/group/embed-certs/serial/SecondStart 49.61
367 TestStartStop/group/newest-cni/serial/FirstStart 33.5
368 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
369 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
370 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
372 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 12.28
374 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.06
375 TestStartStop/group/newest-cni/serial/DeployApp 0
377 TestStartStop/group/newest-cni/serial/Stop 1.34
378 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
379 TestStartStop/group/newest-cni/serial/SecondStart 13.06
380 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
381 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 53.35
382 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
383 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
384 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
385 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
387 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
388 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
390 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
391 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
392 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
x
+
TestDownloadOnly/v1.28.0/json-events (28.62s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-491395 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-491395 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (28.619298258s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (28.62s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1115 09:40:48.305288   58962 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1115 09:40:48.305363   58962 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-491395
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-491395: exit status 85 (73.366257ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-491395 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-491395 │ jenkins │ v1.37.0 │ 15 Nov 25 09:40 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:40:19
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:40:19.737950   58974 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:40:19.738101   58974 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:40:19.738114   58974 out.go:374] Setting ErrFile to fd 2...
	I1115 09:40:19.738120   58974 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:40:19.738342   58974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	W1115 09:40:19.738465   58974 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21894-55448/.minikube/config/config.json: open /home/jenkins/minikube-integration/21894-55448/.minikube/config/config.json: no such file or directory
	I1115 09:40:19.738941   58974 out.go:368] Setting JSON to true
	I1115 09:40:19.739972   58974 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4957,"bootTime":1763194663,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:40:19.740028   58974 start.go:143] virtualization: kvm guest
	I1115 09:40:19.742173   58974 out.go:99] [download-only-491395] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:40:19.742309   58974 notify.go:221] Checking for updates...
	W1115 09:40:19.742373   58974 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball: no such file or directory
	I1115 09:40:19.743639   58974 out.go:171] MINIKUBE_LOCATION=21894
	I1115 09:40:19.745558   58974 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:40:19.746788   58974 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 09:40:19.747926   58974 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube
	I1115 09:40:19.748941   58974 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1115 09:40:19.750884   58974 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1115 09:40:19.751123   58974 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:40:19.773478   58974 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 09:40:19.773571   58974 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:40:20.123343   58974 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:true NGoroutines:66 SystemTime:2025-11-15 09:40:20.114095694 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:40:20.123448   58974 docker.go:319] overlay module found
	I1115 09:40:20.125193   58974 out.go:99] Using the docker driver based on user configuration
	I1115 09:40:20.125227   58974 start.go:309] selected driver: docker
	I1115 09:40:20.125238   58974 start.go:930] validating driver "docker" against <nil>
	I1115 09:40:20.125321   58974 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:40:20.182249   58974 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:true NGoroutines:66 SystemTime:2025-11-15 09:40:20.17297494 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:40:20.182450   58974 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 09:40:20.182987   58974 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1115 09:40:20.183157   58974 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1115 09:40:20.184896   58974 out.go:171] Using Docker driver with root privileges
	I1115 09:40:20.185966   58974 cni.go:84] Creating CNI manager for ""
	I1115 09:40:20.186030   58974 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:40:20.186041   58974 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 09:40:20.186107   58974 start.go:353] cluster config:
	{Name:download-only-491395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-491395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:40:20.187380   58974 out.go:99] Starting "download-only-491395" primary control-plane node in "download-only-491395" cluster
	I1115 09:40:20.187402   58974 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 09:40:20.188628   58974 out.go:99] Pulling base image v0.0.48-1761985721-21837 ...
	I1115 09:40:20.188674   58974 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 09:40:20.188783   58974 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 09:40:20.205732   58974 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1115 09:40:20.205911   58974 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1115 09:40:20.206018   58974 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1115 09:40:20.340320   58974 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1115 09:40:20.340372   58974 cache.go:65] Caching tarball of preloaded images
	I1115 09:40:20.340543   58974 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 09:40:20.342411   58974 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1115 09:40:20.342429   58974 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1115 09:40:20.500355   58974 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1115 09:40:20.500517   58974 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1115 09:40:35.796797   58974 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1115 09:40:35.797192   58974 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/download-only-491395/config.json ...
	I1115 09:40:35.797234   58974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/download-only-491395/config.json: {Name:mke1e4b0a05198bcb8015b98c1c3db1f57676363 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:40:35.797423   58974 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 09:40:35.797587   58974 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-491395 host does not exist
	  To start a cluster, run: "minikube start -p download-only-491395"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-491395
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (15.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-233430 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-233430 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (15.22395927s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (15.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1115 09:41:03.962616   58962 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1115 09:41:03.962665   58962 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-233430
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-233430: exit status 85 (73.863129ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-491395 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-491395 │ jenkins │ v1.37.0 │ 15 Nov 25 09:40 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 15 Nov 25 09:40 UTC │ 15 Nov 25 09:40 UTC │
	│ delete  │ -p download-only-491395                                                                                                                                                   │ download-only-491395 │ jenkins │ v1.37.0 │ 15 Nov 25 09:40 UTC │ 15 Nov 25 09:40 UTC │
	│ start   │ -o=json --download-only -p download-only-233430 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-233430 │ jenkins │ v1.37.0 │ 15 Nov 25 09:40 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:40:48
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:40:48.789199   59418 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:40:48.789305   59418 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:40:48.789314   59418 out.go:374] Setting ErrFile to fd 2...
	I1115 09:40:48.789318   59418 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:40:48.789518   59418 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 09:40:48.789937   59418 out.go:368] Setting JSON to true
	I1115 09:40:48.790767   59418 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4986,"bootTime":1763194663,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:40:48.790829   59418 start.go:143] virtualization: kvm guest
	I1115 09:40:48.792424   59418 out.go:99] [download-only-233430] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:40:48.792591   59418 notify.go:221] Checking for updates...
	I1115 09:40:48.794184   59418 out.go:171] MINIKUBE_LOCATION=21894
	I1115 09:40:48.795345   59418 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:40:48.796510   59418 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 09:40:48.801423   59418 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube
	I1115 09:40:48.802471   59418 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1115 09:40:48.804437   59418 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1115 09:40:48.804660   59418 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:40:48.829322   59418 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 09:40:48.829422   59418 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:40:48.884821   59418 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:46 SystemTime:2025-11-15 09:40:48.875696519 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:40:48.884984   59418 docker.go:319] overlay module found
	I1115 09:40:48.886634   59418 out.go:99] Using the docker driver based on user configuration
	I1115 09:40:48.886666   59418 start.go:309] selected driver: docker
	I1115 09:40:48.886677   59418 start.go:930] validating driver "docker" against <nil>
	I1115 09:40:48.886770   59418 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:40:48.942789   59418 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:46 SystemTime:2025-11-15 09:40:48.933890857 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:40:48.942942   59418 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 09:40:48.943421   59418 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1115 09:40:48.943568   59418 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1115 09:40:48.945255   59418 out.go:171] Using Docker driver with root privileges
	I1115 09:40:48.946274   59418 cni.go:84] Creating CNI manager for ""
	I1115 09:40:48.946331   59418 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:40:48.946341   59418 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 09:40:48.946396   59418 start.go:353] cluster config:
	{Name:download-only-233430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-233430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:40:48.947492   59418 out.go:99] Starting "download-only-233430" primary control-plane node in "download-only-233430" cluster
	I1115 09:40:48.947507   59418 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 09:40:48.948545   59418 out.go:99] Pulling base image v0.0.48-1761985721-21837 ...
	I1115 09:40:48.948576   59418 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:40:48.948614   59418 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 09:40:48.964836   59418 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1115 09:40:48.964983   59418 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1115 09:40:48.965004   59418 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory, skipping pull
	I1115 09:40:48.965009   59418 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in cache, skipping pull
	I1115 09:40:48.965017   59418 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 as a tarball
	I1115 09:40:49.095187   59418 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 09:40:49.095265   59418 cache.go:65] Caching tarball of preloaded images
	I1115 09:40:49.095486   59418 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:40:49.097225   59418 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1115 09:40:49.097241   59418 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1115 09:40:49.253308   59418 preload.go:295] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1115 09:40:49.253367   59418 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 09:41:01.809768   59418 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 09:41:01.810200   59418 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/download-only-233430/config.json ...
	I1115 09:41:01.810237   59418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/download-only-233430/config.json: {Name:mk29d4e1a7d2159946e7b56dafdd8bb8179e6f8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:41:01.810416   59418 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:41:01.810579   59418 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21894-55448/.minikube/cache/linux/amd64/v1.34.1/kubectl
	
	
	* The control-plane node download-only-233430 host does not exist
	  To start a cluster, run: "minikube start -p download-only-233430"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)
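
Note: the download step logged above verifies the preload tarball against an MD5 checksum obtained from the GCS API (the "?checksum=md5:..." query on the download URL). The Go sketch below is a minimal, hypothetical illustration of that verify-while-downloading pattern, not minikube's actual download code; the URL and checksum are copied from the log above.

// Hypothetical sketch only: download a file and verify the MD5 checksum the
// GCS API returned for it, mirroring the preload download logged above.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	// Hash the stream while writing it to disk, then compare digests.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"
	if err := downloadWithMD5(url, "/tmp/preload.tar.lz4", "d1a46823b9241c5d38b5e0866197f2a8"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}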

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-233430
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.42s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-722822 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-722822" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-722822
--- PASS: TestDownloadOnlyKic (0.42s)

                                                
                                    
x
+
TestBinaryMirror (0.83s)

                                                
                                                
=== RUN   TestBinaryMirror
I1115 09:41:05.121217   58962 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-602120 --alsologtostderr --binary-mirror http://127.0.0.1:40137 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-602120" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-602120
--- PASS: TestBinaryMirror (0.83s)
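
Note: the kubectl download logged above uses the "?checksum=file:...kubectl.sha256" form, where the expected digest lives in a separate .sha256 file rather than in the URL itself. The sketch below is a hedged, hypothetical illustration of that pattern (fetch the binary and its published digest, then compare); it is not minikube's downloader.

// Hypothetical sketch only: fetch a binary and its published .sha256 file,
// then compare SHA-256 digests, mirroring the "checksum=file:" URL above.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	sumFile, err := fetch(base + ".sha256")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The .sha256 file contains the hex digest, optionally followed by a name.
	want := strings.Fields(string(sumFile))[0]
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != want {
		fmt.Fprintln(os.Stderr, "kubectl checksum mismatch")
		os.Exit(1)
	}
	fmt.Println("kubectl checksum verified")
}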

                                                
                                    
x
+
TestOffline (88.7s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-637291 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-637291 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m26.030457181s)
helpers_test.go:175: Cleaning up "offline-crio-637291" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-637291
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-637291: (2.666535999s)
--- PASS: TestOffline (88.70s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-209049
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-209049: exit status 85 (69.645586ms)

                                                
                                                
-- stdout --
	* Profile "addons-209049" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-209049"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-209049
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-209049: exit status 85 (68.721281ms)

                                                
                                                
-- stdout --
	* Profile "addons-209049" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-209049"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (167.45s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-209049 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-209049 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m47.449361719s)
--- PASS: TestAddons/Setup (167.45s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-209049 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-209049 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (12.47s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-209049 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-209049 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4d4b92e5-3f08-48f7-845f-c61019032b56] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4d4b92e5-3f08-48f7-845f-c61019032b56] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 12.032199505s
addons_test.go:694: (dbg) Run:  kubectl --context addons-209049 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-209049 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-209049 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (12.47s)
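
Note: this test asserts that the gcp-auth webhook injects GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT into pods in the default namespace. The small Go helper below is hypothetical (not part of the test suite) and simply repeats the printenv checks shown above; it assumes the addons-209049 context and the busybox pod still exist.

// Hypothetical helper: repeat the printenv checks above against the busybox pod.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func printenv(name string) (string, error) {
	out, err := exec.Command("kubectl", "--context", "addons-209049",
		"exec", "busybox", "--", "/bin/sh", "-c", "printenv "+name).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	for _, v := range []string{"GOOGLE_APPLICATION_CREDENTIALS", "GOOGLE_CLOUD_PROJECT"} {
		val, err := printenv(v)
		if err != nil {
			fmt.Printf("%s not set in pod: %v\n", v, err)
			continue
		}
		fmt.Printf("%s=%s\n", v, val)
	}
}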

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.41s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-209049
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-209049: (12.118843585s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-209049
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-209049
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-209049
--- PASS: TestAddons/StoppedEnableDisable (12.41s)

                                                
                                    
x
+
TestCertOptions (28.19s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-535782 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-535782 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (25.49116088s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-535782 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-535782 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-535782 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-535782" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-535782
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-535782: (2.018441569s)
--- PASS: TestCertOptions (28.19s)
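
Note: TestCertOptions passes extra --apiserver-ips/--apiserver-names values and then inspects the apiserver certificate over SSH. The hypothetical Go snippet below repeats that SAN check by shelling out to the same ssh/openssl command shown above; it only works while the cert-options-535782 profile still exists (the test deletes it at the end).

// Hypothetical helper: rerun the SAN check from this test and look for the
// extra --apiserver-ips / --apiserver-names values in the apiserver cert.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "cert-options-535782",
		"ssh", "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").CombinedOutput()
	if err != nil {
		fmt.Printf("ssh/openssl failed: %v\n%s", err, out)
		return
	}
	for _, want := range []string{"192.168.15.15", "www.google.com", "localhost"} {
		if strings.Contains(string(out), want) {
			fmt.Println("found SAN:", want)
		} else {
			fmt.Println("missing SAN:", want)
		}
	}
}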

                                                
                                    
x
+
TestCertExpiration (231.58s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-971967 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-971967 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (33.598995801s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-971967 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-971967 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (15.345617389s)
helpers_test.go:175: Cleaning up "cert-expiration-971967" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-971967
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-971967: (2.631327462s)
--- PASS: TestCertExpiration (231.58s)

                                                
                                    
x
+
TestForceSystemdFlag (32.51s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-744529 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-744529 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (29.670283667s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-744529 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-744529" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-744529
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-744529: (2.514947093s)
--- PASS: TestForceSystemdFlag (32.51s)

                                                
                                    
x
+
TestForceSystemdEnv (35.46s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-701383 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-701383 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.683335615s)
helpers_test.go:175: Cleaning up "force-systemd-env-701383" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-701383
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-701383: (2.778786644s)
--- PASS: TestForceSystemdEnv (35.46s)

                                                
                                    
x
+
TestErrorSpam/setup (20.74s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-273583 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-273583 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-273583 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-273583 --driver=docker  --container-runtime=crio: (20.735657995s)
--- PASS: TestErrorSpam/setup (20.74s)

                                                
                                    
x
+
TestErrorSpam/start (0.66s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-273583 --log_dir /tmp/nospam-273583 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-273583 --log_dir /tmp/nospam-273583 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-273583 --log_dir /tmp/nospam-273583 start --dry-run
--- PASS: TestErrorSpam/start (0.66s)

                                                
                                    
x
+
TestErrorSpam/status (0.95s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-273583 --log_dir /tmp/nospam-273583 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-273583 --log_dir /tmp/nospam-273583 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-273583 --log_dir /tmp/nospam-273583 status
--- PASS: TestErrorSpam/status (0.95s)

                                                
                                    
x
+
TestErrorSpam/pause (6.18s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-273583 --log_dir /tmp/nospam-273583 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-273583 --log_dir /tmp/nospam-273583 pause: exit status 80 (2.434653011s)

                                                
                                                
-- stdout --
	* Pausing node nospam-273583 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:47:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-273583 --log_dir /tmp/nospam-273583 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-273583 --log_dir /tmp/nospam-273583 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-273583 --log_dir /tmp/nospam-273583 pause: exit status 80 (2.204817125s)

                                                
                                                
-- stdout --
	* Pausing node nospam-273583 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:47:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-273583 --log_dir /tmp/nospam-273583 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-273583 --log_dir /tmp/nospam-273583 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-273583 --log_dir /tmp/nospam-273583 pause: exit status 80 (1.544005308s)

                                                
                                                
-- stdout --
	* Pausing node nospam-273583 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:47:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-273583 --log_dir /tmp/nospam-273583 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.18s)
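
Note: every pause attempt above fails the same way: "sudo runc list -f json" exits 1 with "open /run/runc: no such file or directory", so minikube aborts with GUEST_PAUSE before pausing anything. The hypothetical Go probe below (not minikube code) reruns that command over "minikube ssh" to separate a missing runc state directory from other failures; it assumes the nospam-273583 profile is still running.

// Hypothetical debugging probe: rerun the command GUEST_PAUSE reports as
// failing and classify the result.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "nospam-273583",
		"ssh", "--", "sudo runc list -f json")
	out, err := cmd.CombinedOutput()
	switch {
	case err == nil:
		fmt.Printf("runc list succeeded:\n%s", out)
	case strings.Contains(string(out), "/run/runc: no such file or directory"):
		// Same symptom as the pause failures above: runc's state directory is
		// absent inside the node, so listing containers cannot work.
		fmt.Println("runc state directory missing inside the node")
	default:
		fmt.Printf("runc list failed: %v\n%s", err, out)
	}
}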

                                                
                                    
x
+
TestErrorSpam/unpause (5.68s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-273583 --log_dir /tmp/nospam-273583 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-273583 --log_dir /tmp/nospam-273583 unpause: exit status 80 (2.007106812s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-273583 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:47:38Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-273583 --log_dir /tmp/nospam-273583 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-273583 --log_dir /tmp/nospam-273583 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-273583 --log_dir /tmp/nospam-273583 unpause: exit status 80 (1.841133794s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-273583 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:47:39Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-273583 --log_dir /tmp/nospam-273583 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-273583 --log_dir /tmp/nospam-273583 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-273583 --log_dir /tmp/nospam-273583 unpause: exit status 80 (1.83507949s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-273583 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:47:41Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-273583 --log_dir /tmp/nospam-273583 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.68s)

                                                
                                    
x
+
TestErrorSpam/stop (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-273583 --log_dir /tmp/nospam-273583 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-273583 --log_dir /tmp/nospam-273583 stop: (1.310452085s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-273583 --log_dir /tmp/nospam-273583 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-273583 --log_dir /tmp/nospam-273583 stop
--- PASS: TestErrorSpam/stop (1.52s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21894-55448/.minikube/files/etc/test/nested/copy/58962/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (73.46s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-169872 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1115 09:48:54.047215   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:48:54.053624   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:48:54.064995   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:48:54.086626   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:48:54.128003   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:48:54.209463   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:48:54.371044   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:48:54.692742   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:48:55.334759   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:48:56.616164   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:48:59.179094   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-169872 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m13.455718342s)
--- PASS: TestFunctional/serial/StartWithProxy (73.46s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (30.28s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1115 09:49:01.128832   58962 config.go:182] Loaded profile config "functional-169872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-169872 --alsologtostderr -v=8
E1115 09:49:04.301135   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:49:14.542620   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-169872 --alsologtostderr -v=8: (30.275456167s)
functional_test.go:678: soft start took 30.276157269s for "functional-169872" cluster.
I1115 09:49:31.404662   58962 config.go:182] Loaded profile config "functional-169872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (30.28s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-169872 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-169872 cache add registry.k8s.io/pause:3.3: (1.05333515s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.00s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.37s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-169872 /tmp/TestFunctionalserialCacheCmdcacheadd_local3022369132/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 cache add minikube-local-cache-test:functional-169872
E1115 09:49:35.024060   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-169872 cache add minikube-local-cache-test:functional-169872: (2.02252248s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 cache delete minikube-local-cache-test:functional-169872
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-169872
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.37s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169872 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (283.491526ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)
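Note: the cache_reload steps above can be reproduced by hand with roughly the sequence below. This is only a sketch built from the commands recorded in this log; the profile name functional-169872 is specific to this run and should be replaced with your own profile.

	# seed the image into minikube's local cache and push it to the node
	out/minikube-linux-amd64 -p functional-169872 cache add registry.k8s.io/pause:latest
	# remove the image from the node's container runtime, then confirm it is gone
	out/minikube-linux-amd64 -p functional-169872 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-169872 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail here
	# re-push everything from the cache and verify the image is back
	out/minikube-linux-amd64 -p functional-169872 cache reload
	out/minikube-linux-amd64 -p functional-169872 ssh sudo crictl inspecti registry.k8s.io/pause:latest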

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 kubectl -- --context functional-169872 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-169872 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (27.36s)
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-169872 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-169872 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (27.36119398s)
functional_test.go:776: restart took 27.361431251s for "functional-169872" cluster.
I1115 09:50:06.676923   58962 config.go:182] Loaded profile config "functional-169872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (27.36s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-169872 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.21s)
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-169872 logs: (1.212721353s)
--- PASS: TestFunctional/serial/LogsCmd (1.21s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.23s)
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 logs --file /tmp/TestFunctionalserialLogsFileCmd1362235639/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-169872 logs --file /tmp/TestFunctionalserialLogsFileCmd1362235639/001/logs.txt: (1.22869699s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.23s)

                                                
                                    
TestFunctional/serial/InvalidService (3.98s)
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-169872 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-169872
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-169872: exit status 115 (350.896474ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30329 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-169872 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.98s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.47s)
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169872 config get cpus: exit status 14 (96.488661ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169872 config get cpus: exit status 14 (62.282572ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (11.29s)
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-169872 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-169872 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 97676: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.29s)

                                                
                                    
TestFunctional/parallel/DryRun (0.38s)
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-169872 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-169872 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (167.466907ms)

                                                
                                                
-- stdout --
	* [functional-169872] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21894
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:50:48.739437   97267 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:50:48.739549   97267 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:50:48.739561   97267 out.go:374] Setting ErrFile to fd 2...
	I1115 09:50:48.739568   97267 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:50:48.739788   97267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 09:50:48.740219   97267 out.go:368] Setting JSON to false
	I1115 09:50:48.741283   97267 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5586,"bootTime":1763194663,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:50:48.741377   97267 start.go:143] virtualization: kvm guest
	I1115 09:50:48.743371   97267 out.go:179] * [functional-169872] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:50:48.744633   97267 notify.go:221] Checking for updates...
	I1115 09:50:48.744656   97267 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 09:50:48.746035   97267 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:50:48.747182   97267 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 09:50:48.748289   97267 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube
	I1115 09:50:48.752517   97267 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 09:50:48.753597   97267 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:50:48.755441   97267 config.go:182] Loaded profile config "functional-169872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:50:48.756148   97267 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:50:48.779906   97267 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 09:50:48.780015   97267 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:50:48.837858   97267 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:56 SystemTime:2025-11-15 09:50:48.828035313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:50:48.837978   97267 docker.go:319] overlay module found
	I1115 09:50:48.839789   97267 out.go:179] * Using the docker driver based on existing profile
	I1115 09:50:48.840887   97267 start.go:309] selected driver: docker
	I1115 09:50:48.840903   97267 start.go:930] validating driver "docker" against &{Name:functional-169872 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-169872 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:50:48.841025   97267 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:50:48.843232   97267 out.go:203] 
	W1115 09:50:48.844470   97267 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1115 09:50:48.845782   97267 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-169872 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.38s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.17s)
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-169872 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-169872 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (172.845164ms)

                                                
                                                
-- stdout --
	* [functional-169872] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21894
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:50:48.567682   97182 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:50:48.567796   97182 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:50:48.567805   97182 out.go:374] Setting ErrFile to fd 2...
	I1115 09:50:48.567810   97182 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:50:48.568137   97182 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 09:50:48.568606   97182 out.go:368] Setting JSON to false
	I1115 09:50:48.569614   97182 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5586,"bootTime":1763194663,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:50:48.569720   97182 start.go:143] virtualization: kvm guest
	I1115 09:50:48.571568   97182 out.go:179] * [functional-169872] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1115 09:50:48.572996   97182 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 09:50:48.573007   97182 notify.go:221] Checking for updates...
	I1115 09:50:48.575294   97182 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:50:48.576464   97182 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 09:50:48.577659   97182 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube
	I1115 09:50:48.582451   97182 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 09:50:48.583513   97182 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:50:48.585271   97182 config.go:182] Loaded profile config "functional-169872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:50:48.585910   97182 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:50:48.609370   97182 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 09:50:48.609478   97182 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:50:48.669589   97182 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:56 SystemTime:2025-11-15 09:50:48.658924442 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:50:48.669705   97182 docker.go:319] overlay module found
	I1115 09:50:48.671547   97182 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1115 09:50:48.672841   97182 start.go:309] selected driver: docker
	I1115 09:50:48.672858   97182 start.go:930] validating driver "docker" against &{Name:functional-169872 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-169872 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:50:48.672945   97182 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:50:48.675004   97182 out.go:203] 
	W1115 09:50:48.676195   97182 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1115 09:50:48.677446   97182 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.98s)
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.98s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (37.14s)
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [c06c835c-0b7b-49f8-b88d-1aa1feedccb6] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003790749s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-169872 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-169872 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-169872 get pvc myclaim -o=json
I1115 09:50:20.581853   58962 retry.go:31] will retry after 2.147978469s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:902e7fa0-6cb1-4bc4-8674-830d28385de2 ResourceVersion:722 Generation:0 CreationTimestamp:2025-11-15 09:50:20 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0019c7840 VolumeMode:0xc0019c7850 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-169872 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-169872 apply -f testdata/storage-provisioner/pod.yaml
I1115 09:50:22.992193   58962 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [eadf2f64-1a06-40a1-aa46-d301b071759e] Pending
helpers_test.go:352: "sp-pod" [eadf2f64-1a06-40a1-aa46-d301b071759e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [eadf2f64-1a06-40a1-aa46-d301b071759e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 19.003544482s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-169872 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-169872 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-169872 apply -f testdata/storage-provisioner/pod.yaml
I1115 09:50:43.159465   58962 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [95644f4a-6dd0-49bc-9d1c-e8ece73c804f] Pending
helpers_test.go:352: "sp-pod" [95644f4a-6dd0-49bc-9d1c-e8ece73c804f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [95644f4a-6dd0-49bc-9d1c-e8ece73c804f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.017960351s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-169872 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (37.14s)
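Note: the PersistentVolumeClaim flow above corresponds to roughly the manual sequence below. This is a sketch only; the manifests are the testdata files shipped with the minikube test suite, and the context name functional-169872 is specific to this run.

	# create a PVC backed by the storage-provisioner addon and check that it binds
	kubectl --context functional-169872 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-169872 get pvc myclaim -o=json
	# start a pod that mounts the claim and write a file onto the volume
	kubectl --context functional-169872 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-169872 exec sp-pod -- touch /tmp/mount/foo
	# recreate the pod; the file must still be there because it lives on the persistent volume
	kubectl --context functional-169872 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-169872 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-169872 exec sp-pod -- ls /tmp/mount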

                                                
                                    
TestFunctional/parallel/SSHCmd (0.62s)
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.62s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.86s)
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh -n functional-169872 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 cp functional-169872:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3529807669/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh -n functional-169872 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh -n functional-169872 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.86s)

                                                
                                    
TestFunctional/parallel/MySQL (21.23s)
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-169872 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-thgfq" [0400285d-593c-4e48-9fd5-23b03c7a5880] Pending
helpers_test.go:352: "mysql-5bb876957f-thgfq" [0400285d-593c-4e48-9fd5-23b03c7a5880] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-thgfq" [0400285d-593c-4e48-9fd5-23b03c7a5880] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.00383873s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-169872 exec mysql-5bb876957f-thgfq -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-169872 exec mysql-5bb876957f-thgfq -- mysql -ppassword -e "show databases;": exit status 1 (285.584635ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1115 09:50:31.690042   58962 retry.go:31] will retry after 687.927954ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-169872 exec mysql-5bb876957f-thgfq -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-169872 exec mysql-5bb876957f-thgfq -- mysql -ppassword -e "show databases;": exit status 1 (128.664531ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1115 09:50:32.507708   58962 retry.go:31] will retry after 1.739868409s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-169872 exec mysql-5bb876957f-thgfq -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.23s)

                                                
                                    
TestFunctional/parallel/FileSync (0.4s)
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/58962/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh "sudo cat /etc/test/nested/copy/58962/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.40s)

                                                
                                    
TestFunctional/parallel/CertSync (2.01s)
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/58962.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh "sudo cat /etc/ssl/certs/58962.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/58962.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh "sudo cat /usr/share/ca-certificates/58962.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/589622.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh "sudo cat /etc/ssl/certs/589622.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/589622.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh "sudo cat /usr/share/ca-certificates/589622.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.01s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-169872 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169872 ssh "sudo systemctl is-active docker": exit status 1 (277.469843ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169872 ssh "sudo systemctl is-active containerd": exit status 1 (280.242972ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)

                                                
                                    
TestFunctional/parallel/License (0.81s)
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.81s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-169872 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-169872 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-169872 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-169872 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 92175: os: process already finished
helpers_test.go:525: unable to kill pid 91791: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-169872 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (19.25s)
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-169872 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [c38976fc-98f7-4654-a0da-b4bd10668f08] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [c38976fc-98f7-4654-a0da-b4bd10668f08] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 19.006362056s
I1115 09:50:34.079163   58962 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (19.25s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-169872 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.218.122 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-169872 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.4s)
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "337.202265ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "61.721232ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "327.650606ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "59.920969ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.6s)
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-169872 /tmp/TestFunctionalparallelMountCmdany-port3385608388/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763200235606311467" to /tmp/TestFunctionalparallelMountCmdany-port3385608388/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763200235606311467" to /tmp/TestFunctionalparallelMountCmdany-port3385608388/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763200235606311467" to /tmp/TestFunctionalparallelMountCmdany-port3385608388/001/test-1763200235606311467
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169872 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (277.989367ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1115 09:50:35.884604   58962 retry.go:31] will retry after 407.458056ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 15 09:50 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 15 09:50 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 15 09:50 test-1763200235606311467
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh cat /mount-9p/test-1763200235606311467
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-169872 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [c71c0598-2c57-4544-a65f-fc9224e37156] Pending
helpers_test.go:352: "busybox-mount" [c71c0598-2c57-4544-a65f-fc9224e37156] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [c71c0598-2c57-4544-a65f-fc9224e37156] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [c71c0598-2c57-4544-a65f-fc9224e37156] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.002911947s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-169872 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-169872 /tmp/TestFunctionalparallelMountCmdany-port3385608388/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.60s)
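The any-port flow above reduces to: start a background 9p mount, confirm it with findmnt, exercise it from the host and from a pod, then unmount. A minimal manual sketch using the same commands (the host directory is arbitrary):
out/minikube-linux-amd64 mount -p functional-169872 /tmp/mnt-src:/mount-9p &          # background the mount helper
out/minikube-linux-amd64 -p functional-169872 ssh "findmnt -T /mount-9p | grep 9p"    # verify a 9p filesystem is mounted
out/minikube-linux-amd64 -p functional-169872 ssh -- ls -la /mount-9p                 # guest sees the host directory contents
out/minikube-linux-amd64 -p functional-169872 ssh "sudo umount -f /mount-9p"          # tear it down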

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-169872 /tmp/TestFunctionalparallelMountCmdspecific-port430148380/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169872 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (277.974201ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1115 09:50:44.483463   58962 retry.go:31] will retry after 278.995253ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-169872 /tmp/TestFunctionalparallelMountCmdspecific-port430148380/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169872 ssh "sudo umount -f /mount-9p": exit status 1 (269.239216ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-169872 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-169872 /tmp/TestFunctionalparallelMountCmdspecific-port430148380/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.58s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-169872 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2391987357/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-169872 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2391987357/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-169872 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2391987357/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169872 ssh "findmnt -T" /mount1: exit status 1 (343.215149ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1115 09:50:46.130021   58962 retry.go:31] will retry after 532.794898ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-169872 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-169872 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2391987357/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-169872 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2391987357/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-169872 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2391987357/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.74s)
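VerifyCleanup starts three concurrent mounts and then relies on a single mount --kill=true to terminate every mount helper for the profile; a sketch of that cleanup path (commands as in the log):
out/minikube-linux-amd64 mount -p functional-169872 --kill=true                               # kill all mount helpers for the profile
out/minikube-linux-amd64 -p functional-169872 ssh "findmnt -T /mount1" || echo "mount1 gone"  # expect the mounts to be absent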

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-169872 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-169872 image ls --format short --alsologtostderr:
I1115 09:51:01.283629   98981 out.go:360] Setting OutFile to fd 1 ...
I1115 09:51:01.283929   98981 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:51:01.283940   98981 out.go:374] Setting ErrFile to fd 2...
I1115 09:51:01.283945   98981 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:51:01.284256   98981 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
I1115 09:51:01.284880   98981 config.go:182] Loaded profile config "functional-169872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:51:01.284987   98981 config.go:182] Loaded profile config "functional-169872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:51:01.285439   98981 cli_runner.go:164] Run: docker container inspect functional-169872 --format={{.State.Status}}
I1115 09:51:01.306511   98981 ssh_runner.go:195] Run: systemctl --version
I1115 09:51:01.306602   98981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-169872
I1115 09:51:01.326332   98981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/functional-169872/id_rsa Username:docker}
I1115 09:51:01.419355   98981 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-169872 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/library/nginx                 │ latest             │ d261fd19cb632 │ 155MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-169872 image ls --format table --alsologtostderr:
I1115 09:51:01.512275   99080 out.go:360] Setting OutFile to fd 1 ...
I1115 09:51:01.512416   99080 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:51:01.512435   99080 out.go:374] Setting ErrFile to fd 2...
I1115 09:51:01.512441   99080 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:51:01.512658   99080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
I1115 09:51:01.513236   99080 config.go:182] Loaded profile config "functional-169872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:51:01.513331   99080 config.go:182] Loaded profile config "functional-169872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:51:01.513801   99080 cli_runner.go:164] Run: docker container inspect functional-169872 --format={{.State.Status}}
I1115 09:51:01.533151   99080 ssh_runner.go:195] Run: systemctl --version
I1115 09:51:01.533218   99080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-169872
I1115 09:51:01.553327   99080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/functional-169872/id_rsa Username:docker}
I1115 09:51:01.648329   99080 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-169872 image ls --format json --alsologtostderr:
[{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":
["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"4094
67f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb2
8c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"d261fd19cb63238535ab80d4e1be1d9e7f6c8b5a28a820188968dd3e6f06072d","repoDigests":["docker.io/library/nginx@sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad","docker.io/library/nginx@sha256:bd1578eec775d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b"],"repoTags":["docker.io/library/nginx:latest"],"size":"155489797"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.
3"],"size":"686139"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f97
7ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoT
ags":[],"size":"43824855"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-169872 image ls --format json --alsologtostderr:
I1115 09:51:01.323558   98994 out.go:360] Setting OutFile to fd 1 ...
I1115 09:51:01.323682   98994 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:51:01.323696   98994 out.go:374] Setting ErrFile to fd 2...
I1115 09:51:01.323700   98994 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:51:01.323913   98994 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
I1115 09:51:01.324501   98994 config.go:182] Loaded profile config "functional-169872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:51:01.324596   98994 config.go:182] Loaded profile config "functional-169872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:51:01.325065   98994 cli_runner.go:164] Run: docker container inspect functional-169872 --format={{.State.Status}}
I1115 09:51:01.343603   98994 ssh_runner.go:195] Run: systemctl --version
I1115 09:51:01.343688   98994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-169872
I1115 09:51:01.363719   98994 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/functional-169872/id_rsa Username:docker}
I1115 09:51:01.457536   98994 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-169872 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: d261fd19cb63238535ab80d4e1be1d9e7f6c8b5a28a820188968dd3e6f06072d
repoDigests:
- docker.io/library/nginx@sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad
- docker.io/library/nginx@sha256:bd1578eec775d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b
repoTags:
- docker.io/library/nginx:latest
size: "155489797"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-169872 image ls --format yaml --alsologtostderr:
I1115 09:51:01.554817   99103 out.go:360] Setting OutFile to fd 1 ...
I1115 09:51:01.555093   99103 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:51:01.555111   99103 out.go:374] Setting ErrFile to fd 2...
I1115 09:51:01.555115   99103 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:51:01.555309   99103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
I1115 09:51:01.555909   99103 config.go:182] Loaded profile config "functional-169872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:51:01.556038   99103 config.go:182] Loaded profile config "functional-169872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:51:01.556433   99103 cli_runner.go:164] Run: docker container inspect functional-169872 --format={{.State.Status}}
I1115 09:51:01.574841   99103 ssh_runner.go:195] Run: systemctl --version
I1115 09:51:01.574888   99103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-169872
I1115 09:51:01.593596   99103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/functional-169872/id_rsa Username:docker}
I1115 09:51:01.689107   99103 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
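The four ImageList subtests above are the same image ls call with different --format values; for reference:
out/minikube-linux-amd64 -p functional-169872 image ls --format short   # one repo:tag per line
out/minikube-linux-amd64 -p functional-169872 image ls --format table   # boxed table with image IDs and sizes
out/minikube-linux-amd64 -p functional-169872 image ls --format json
out/minikube-linux-amd64 -p functional-169872 image ls --format yaml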

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (6.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169872 ssh pgrep buildkitd: exit status 1 (293.634304ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 image build -t localhost/my-image:functional-169872 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-169872 image build -t localhost/my-image:functional-169872 testdata/build --alsologtostderr: (6.479537251s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-169872 image build -t localhost/my-image:functional-169872 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 77e55725b06
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-169872
--> 870c876d79e
Successfully tagged localhost/my-image:functional-169872
870c876d79eefecd132a03c9dbde7eb41a871d0c61e7cec836056dc4f440dd1c
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-169872 image build -t localhost/my-image:functional-169872 testdata/build --alsologtostderr:
I1115 09:51:02.038609   99404 out.go:360] Setting OutFile to fd 1 ...
I1115 09:51:02.038766   99404 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:51:02.038777   99404 out.go:374] Setting ErrFile to fd 2...
I1115 09:51:02.038783   99404 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:51:02.038982   99404 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
I1115 09:51:02.039578   99404 config.go:182] Loaded profile config "functional-169872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:51:02.040268   99404 config.go:182] Loaded profile config "functional-169872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:51:02.040689   99404 cli_runner.go:164] Run: docker container inspect functional-169872 --format={{.State.Status}}
I1115 09:51:02.059290   99404 ssh_runner.go:195] Run: systemctl --version
I1115 09:51:02.059357   99404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-169872
I1115 09:51:02.078172   99404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/functional-169872/id_rsa Username:docker}
I1115 09:51:02.172606   99404 build_images.go:162] Building image from path: /tmp/build.1760255109.tar
I1115 09:51:02.172704   99404 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1115 09:51:02.181327   99404 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1760255109.tar
I1115 09:51:02.185212   99404 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1760255109.tar: stat -c "%s %y" /var/lib/minikube/build/build.1760255109.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1760255109.tar': No such file or directory
I1115 09:51:02.185238   99404 ssh_runner.go:362] scp /tmp/build.1760255109.tar --> /var/lib/minikube/build/build.1760255109.tar (3072 bytes)
I1115 09:51:02.204214   99404 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1760255109
I1115 09:51:02.212627   99404 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1760255109 -xf /var/lib/minikube/build/build.1760255109.tar
I1115 09:51:02.220722   99404 crio.go:315] Building image: /var/lib/minikube/build/build.1760255109
I1115 09:51:02.220825   99404 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-169872 /var/lib/minikube/build/build.1760255109 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1115 09:51:08.434095   99404 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-169872 /var/lib/minikube/build/build.1760255109 --cgroup-manager=cgroupfs: (6.213233095s)
I1115 09:51:08.434185   99404 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1760255109
I1115 09:51:08.442447   99404 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1760255109.tar
I1115 09:51:08.450326   99404 build_images.go:218] Built localhost/my-image:functional-169872 from /tmp/build.1760255109.tar
I1115 09:51:08.450366   99404 build_images.go:134] succeeded building to: functional-169872
I1115 09:51:08.450373   99404 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 image ls
E1115 09:51:37.907883   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:53:54.047205   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:54:21.749726   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:58:54.047345   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.99s)
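The stderr above shows how image build works on a crio cluster: the build context is tarred, copied into /var/lib/minikube/build on the node, and built there with podman. A rough manual equivalent (the build.<id> name is generated per run, as seen in the log):
out/minikube-linux-amd64 -p functional-169872 image build -t localhost/my-image:functional-169872 testdata/build
#   ...which, on the node, effectively runs:
#   sudo podman build -t localhost/my-image:functional-169872 /var/lib/minikube/build/build.<id> --cgroup-manager=cgroupfs
out/minikube-linux-amd64 -p functional-169872 image ls   # confirm localhost/my-image:functional-169872 is listed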

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.738028527s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-169872
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.76s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 image rm kicbase/echo-server:functional-169872 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.51s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-169872 service list: (1.706842907s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.71s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-169872 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-169872 service list -o json: (1.703951313s)
functional_test.go:1504: Took "1.704079455s" to run "out/minikube-linux-amd64 -p functional-169872 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.70s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-169872
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-169872
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-169872
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (169.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-828560 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m48.547955673s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (169.26s)
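StartCluster brings up the multi-control-plane cluster with a single start invocation; the flags shown in the log are the whole recipe (a sketch with the verbosity flags omitted; memory is in MB):
out/minikube-linux-amd64 -p ha-828560 start --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
out/minikube-linux-amd64 -p ha-828560 status   # prints one status block per node once the cluster is up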

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-828560 kubectl -- rollout status deployment/busybox: (5.369387574s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 kubectl -- exec busybox-7b57f96db7-6wk5f -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 kubectl -- exec busybox-7b57f96db7-k2wbj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 kubectl -- exec busybox-7b57f96db7-rxnjt -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 kubectl -- exec busybox-7b57f96db7-6wk5f -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 kubectl -- exec busybox-7b57f96db7-k2wbj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 kubectl -- exec busybox-7b57f96db7-rxnjt -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 kubectl -- exec busybox-7b57f96db7-6wk5f -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 kubectl -- exec busybox-7b57f96db7-k2wbj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 kubectl -- exec busybox-7b57f96db7-rxnjt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.70s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 kubectl -- exec busybox-7b57f96db7-6wk5f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 kubectl -- exec busybox-7b57f96db7-6wk5f -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 kubectl -- exec busybox-7b57f96db7-k2wbj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 kubectl -- exec busybox-7b57f96db7-k2wbj -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 kubectl -- exec busybox-7b57f96db7-rxnjt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 kubectl -- exec busybox-7b57f96db7-rxnjt -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.10s)
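PingHostFromPods checks host reachability from each busybox pod in two steps: resolve host.minikube.internal inside the pod, then ping the resulting gateway address. A single-pod sketch with the commands from this run (pod name and IP are specific to this run):
kubectl --context ha-828560 exec busybox-7b57f96db7-6wk5f -- nslookup host.minikube.internal
kubectl --context ha-828560 exec busybox-7b57f96db7-6wk5f -- ping -c 1 192.168.49.1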

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (27.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 node add --alsologtostderr -v 5
E1115 10:03:54.047184   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-828560 node add --alsologtostderr -v 5: (26.537694754s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (27.41s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-828560 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.90s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (17.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 cp testdata/cp-test.txt ha-828560:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 cp ha-828560:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3601471745/001/cp-test_ha-828560.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 cp ha-828560:/home/docker/cp-test.txt ha-828560-m02:/home/docker/cp-test_ha-828560_ha-828560-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560-m02 "sudo cat /home/docker/cp-test_ha-828560_ha-828560-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 cp ha-828560:/home/docker/cp-test.txt ha-828560-m03:/home/docker/cp-test_ha-828560_ha-828560-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560-m03 "sudo cat /home/docker/cp-test_ha-828560_ha-828560-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 cp ha-828560:/home/docker/cp-test.txt ha-828560-m04:/home/docker/cp-test_ha-828560_ha-828560-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560-m04 "sudo cat /home/docker/cp-test_ha-828560_ha-828560-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 cp testdata/cp-test.txt ha-828560-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 cp ha-828560-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3601471745/001/cp-test_ha-828560-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 cp ha-828560-m02:/home/docker/cp-test.txt ha-828560:/home/docker/cp-test_ha-828560-m02_ha-828560.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560 "sudo cat /home/docker/cp-test_ha-828560-m02_ha-828560.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 cp ha-828560-m02:/home/docker/cp-test.txt ha-828560-m03:/home/docker/cp-test_ha-828560-m02_ha-828560-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560-m03 "sudo cat /home/docker/cp-test_ha-828560-m02_ha-828560-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 cp ha-828560-m02:/home/docker/cp-test.txt ha-828560-m04:/home/docker/cp-test_ha-828560-m02_ha-828560-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560-m04 "sudo cat /home/docker/cp-test_ha-828560-m02_ha-828560-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 cp testdata/cp-test.txt ha-828560-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 cp ha-828560-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3601471745/001/cp-test_ha-828560-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 cp ha-828560-m03:/home/docker/cp-test.txt ha-828560:/home/docker/cp-test_ha-828560-m03_ha-828560.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560 "sudo cat /home/docker/cp-test_ha-828560-m03_ha-828560.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 cp ha-828560-m03:/home/docker/cp-test.txt ha-828560-m02:/home/docker/cp-test_ha-828560-m03_ha-828560-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560-m02 "sudo cat /home/docker/cp-test_ha-828560-m03_ha-828560-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 cp ha-828560-m03:/home/docker/cp-test.txt ha-828560-m04:/home/docker/cp-test_ha-828560-m03_ha-828560-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560-m04 "sudo cat /home/docker/cp-test_ha-828560-m03_ha-828560-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 cp testdata/cp-test.txt ha-828560-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 cp ha-828560-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3601471745/001/cp-test_ha-828560-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 cp ha-828560-m04:/home/docker/cp-test.txt ha-828560:/home/docker/cp-test_ha-828560-m04_ha-828560.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560 "sudo cat /home/docker/cp-test_ha-828560-m04_ha-828560.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 cp ha-828560-m04:/home/docker/cp-test.txt ha-828560-m02:/home/docker/cp-test_ha-828560-m04_ha-828560-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560-m02 "sudo cat /home/docker/cp-test_ha-828560-m04_ha-828560-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 cp ha-828560-m04:/home/docker/cp-test.txt ha-828560-m03:/home/docker/cp-test_ha-828560-m04_ha-828560-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 ssh -n ha-828560-m03 "sudo cat /home/docker/cp-test_ha-828560-m04_ha-828560-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.01s)
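
Every cp above is immediately verified by an ssh'd `sudo cat` on the destination node. A sketch of one copy-and-verify round trip under the same shell-out assumption (paths and profile name taken from the log; error handling omitted for brevity):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		bin, profile := "out/minikube-linux-amd64", "ha-828560"
		want, _ := os.ReadFile("testdata/cp-test.txt")

		// Copy the file into the node, then read it back over ssh and compare.
		exec.Command(bin, "-p", profile, "cp", "testdata/cp-test.txt", profile+":/home/docker/cp-test.txt").Run()
		got, _ := exec.Command(bin, "-p", profile, "ssh", "-n", profile, "sudo cat /home/docker/cp-test.txt").Output()

		if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			fmt.Println("copied file does not match source")
		}
	}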

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-828560 node stop m02 --alsologtostderr -v 5: (12.038701594s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-828560 status --alsologtostderr -v 5: exit status 7 (709.073514ms)

                                                
                                                
-- stdout --
	ha-828560
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-828560-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-828560-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-828560-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:04:40.982599  124481 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:04:40.982854  124481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:04:40.982863  124481 out.go:374] Setting ErrFile to fd 2...
	I1115 10:04:40.982867  124481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:04:40.983060  124481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 10:04:40.983267  124481 out.go:368] Setting JSON to false
	I1115 10:04:40.983311  124481 mustload.go:66] Loading cluster: ha-828560
	I1115 10:04:40.983386  124481 notify.go:221] Checking for updates...
	I1115 10:04:40.983880  124481 config.go:182] Loaded profile config "ha-828560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:04:40.983903  124481 status.go:174] checking status of ha-828560 ...
	I1115 10:04:40.984436  124481 cli_runner.go:164] Run: docker container inspect ha-828560 --format={{.State.Status}}
	I1115 10:04:41.004046  124481 status.go:371] ha-828560 host status = "Running" (err=<nil>)
	I1115 10:04:41.004080  124481 host.go:66] Checking if "ha-828560" exists ...
	I1115 10:04:41.004480  124481 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828560
	I1115 10:04:41.022717  124481 host.go:66] Checking if "ha-828560" exists ...
	I1115 10:04:41.022996  124481 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:04:41.023036  124481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828560
	I1115 10:04:41.043573  124481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/ha-828560/id_rsa Username:docker}
	I1115 10:04:41.138872  124481 ssh_runner.go:195] Run: systemctl --version
	I1115 10:04:41.146177  124481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:04:41.159279  124481 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:04:41.222801  124481 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:75 SystemTime:2025-11-15 10:04:41.211161476 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:04:41.223605  124481 kubeconfig.go:125] found "ha-828560" server: "https://192.168.49.254:8443"
	I1115 10:04:41.223644  124481 api_server.go:166] Checking apiserver status ...
	I1115 10:04:41.223693  124481 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:04:41.235992  124481 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1368/cgroup
	I1115 10:04:41.244853  124481 api_server.go:182] apiserver freezer: "4:freezer:/docker/d93cdf81cfce2a0c8aeae30a9ad85ab5ae0fdd39ac28dc579fd30edef30136e5/crio/crio-d0d3b210a4c7952004b8d99358ce83803c52dc4292b753b90c91c77f0f684926"
	I1115 10:04:41.244906  124481 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d93cdf81cfce2a0c8aeae30a9ad85ab5ae0fdd39ac28dc579fd30edef30136e5/crio/crio-d0d3b210a4c7952004b8d99358ce83803c52dc4292b753b90c91c77f0f684926/freezer.state
	I1115 10:04:41.252416  124481 api_server.go:204] freezer state: "THAWED"
	I1115 10:04:41.252442  124481 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1115 10:04:41.256563  124481 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1115 10:04:41.256592  124481 status.go:463] ha-828560 apiserver status = Running (err=<nil>)
	I1115 10:04:41.256605  124481 status.go:176] ha-828560 status: &{Name:ha-828560 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 10:04:41.256641  124481 status.go:174] checking status of ha-828560-m02 ...
	I1115 10:04:41.256935  124481 cli_runner.go:164] Run: docker container inspect ha-828560-m02 --format={{.State.Status}}
	I1115 10:04:41.275311  124481 status.go:371] ha-828560-m02 host status = "Stopped" (err=<nil>)
	I1115 10:04:41.275335  124481 status.go:384] host is not running, skipping remaining checks
	I1115 10:04:41.275344  124481 status.go:176] ha-828560-m02 status: &{Name:ha-828560-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 10:04:41.275369  124481 status.go:174] checking status of ha-828560-m03 ...
	I1115 10:04:41.275613  124481 cli_runner.go:164] Run: docker container inspect ha-828560-m03 --format={{.State.Status}}
	I1115 10:04:41.294336  124481 status.go:371] ha-828560-m03 host status = "Running" (err=<nil>)
	I1115 10:04:41.294362  124481 host.go:66] Checking if "ha-828560-m03" exists ...
	I1115 10:04:41.294621  124481 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828560-m03
	I1115 10:04:41.312108  124481 host.go:66] Checking if "ha-828560-m03" exists ...
	I1115 10:04:41.312353  124481 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:04:41.312390  124481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828560-m03
	I1115 10:04:41.332252  124481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/ha-828560-m03/id_rsa Username:docker}
	I1115 10:04:41.423770  124481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:04:41.436472  124481 kubeconfig.go:125] found "ha-828560" server: "https://192.168.49.254:8443"
	I1115 10:04:41.436501  124481 api_server.go:166] Checking apiserver status ...
	I1115 10:04:41.436538  124481 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:04:41.447814  124481 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1295/cgroup
	I1115 10:04:41.456339  124481 api_server.go:182] apiserver freezer: "4:freezer:/docker/24cc9af0b6ed69af49d62c84e879402dad5e05cdf574f43cc6fb0eb504ca4309/crio/crio-d150f9c46a5a19e0e2515ef3312b80a68f9fa88221394a3735df7c71a3f488b0"
	I1115 10:04:41.456405  124481 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/24cc9af0b6ed69af49d62c84e879402dad5e05cdf574f43cc6fb0eb504ca4309/crio/crio-d150f9c46a5a19e0e2515ef3312b80a68f9fa88221394a3735df7c71a3f488b0/freezer.state
	I1115 10:04:41.463887  124481 api_server.go:204] freezer state: "THAWED"
	I1115 10:04:41.463925  124481 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1115 10:04:41.468110  124481 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1115 10:04:41.468136  124481 status.go:463] ha-828560-m03 apiserver status = Running (err=<nil>)
	I1115 10:04:41.468146  124481 status.go:176] ha-828560-m03 status: &{Name:ha-828560-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 10:04:41.468161  124481 status.go:174] checking status of ha-828560-m04 ...
	I1115 10:04:41.468404  124481 cli_runner.go:164] Run: docker container inspect ha-828560-m04 --format={{.State.Status}}
	I1115 10:04:41.486347  124481 status.go:371] ha-828560-m04 host status = "Running" (err=<nil>)
	I1115 10:04:41.486371  124481 host.go:66] Checking if "ha-828560-m04" exists ...
	I1115 10:04:41.486632  124481 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828560-m04
	I1115 10:04:41.505981  124481 host.go:66] Checking if "ha-828560-m04" exists ...
	I1115 10:04:41.506331  124481 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:04:41.506393  124481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828560-m04
	I1115 10:04:41.524588  124481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/ha-828560-m04/id_rsa Username:docker}
	I1115 10:04:41.615493  124481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:04:41.627673  124481 status.go:176] ha-828560-m04 status: &{Name:ha-828560-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.75s)
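
Note that `minikube status` exits non-zero here (exit status 7) even though the command itself worked, because one node is stopped; callers therefore need to read the exit code rather than treat any error as a hard failure. A sketch of doing that with os/exec, again using names from the log:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-828560", "status", "--alsologtostderr", "-v", "5")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// A non-zero code (7 in the run above) still comes with usable status output.
			fmt.Println("status exited with code", exitErr.ExitCode())
		}
	}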

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (28.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-828560 node start m02 --alsologtostderr -v 5: (27.403463706s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (28.43s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.263360488s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.26s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (123.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 stop --alsologtostderr -v 5
E1115 10:05:13.400900   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/functional-169872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:05:13.407299   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/functional-169872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:05:13.418653   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/functional-169872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:05:13.440418   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/functional-169872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:05:13.482661   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/functional-169872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:05:13.564976   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/functional-169872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:05:13.726500   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/functional-169872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:05:14.048312   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/functional-169872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:05:14.690410   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/functional-169872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:05:15.972523   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/functional-169872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:05:17.112075   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:05:18.534783   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/functional-169872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:05:23.656529   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/functional-169872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:05:33.898205   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/functional-169872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-828560 stop --alsologtostderr -v 5: (37.981752921s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 start --wait true --alsologtostderr -v 5
E1115 10:05:54.379675   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/functional-169872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:06:35.341502   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/functional-169872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-828560 start --wait true --alsologtostderr -v 5: (1m25.840815252s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (123.96s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (10.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-828560 node delete m03 --alsologtostderr -v 5: (9.927941606s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.76s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.73s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (36.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 stop --alsologtostderr -v 5
E1115 10:07:57.265499   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/functional-169872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-828560 stop --alsologtostderr -v 5: (36.686104673s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-828560 status --alsologtostderr -v 5: exit status 7 (120.267462ms)

                                                
                                                
-- stdout --
	ha-828560
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-828560-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-828560-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:08:04.243516  139293 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:08:04.243640  139293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:08:04.243649  139293 out.go:374] Setting ErrFile to fd 2...
	I1115 10:08:04.243653  139293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:08:04.243830  139293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 10:08:04.243988  139293 out.go:368] Setting JSON to false
	I1115 10:08:04.244018  139293 mustload.go:66] Loading cluster: ha-828560
	I1115 10:08:04.244118  139293 notify.go:221] Checking for updates...
	I1115 10:08:04.244459  139293 config.go:182] Loaded profile config "ha-828560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:08:04.244478  139293 status.go:174] checking status of ha-828560 ...
	I1115 10:08:04.244981  139293 cli_runner.go:164] Run: docker container inspect ha-828560 --format={{.State.Status}}
	I1115 10:08:04.263928  139293 status.go:371] ha-828560 host status = "Stopped" (err=<nil>)
	I1115 10:08:04.263964  139293 status.go:384] host is not running, skipping remaining checks
	I1115 10:08:04.263974  139293 status.go:176] ha-828560 status: &{Name:ha-828560 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 10:08:04.264000  139293 status.go:174] checking status of ha-828560-m02 ...
	I1115 10:08:04.264252  139293 cli_runner.go:164] Run: docker container inspect ha-828560-m02 --format={{.State.Status}}
	I1115 10:08:04.281988  139293 status.go:371] ha-828560-m02 host status = "Stopped" (err=<nil>)
	I1115 10:08:04.282023  139293 status.go:384] host is not running, skipping remaining checks
	I1115 10:08:04.282035  139293 status.go:176] ha-828560-m02 status: &{Name:ha-828560-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 10:08:04.282071  139293 status.go:174] checking status of ha-828560-m04 ...
	I1115 10:08:04.282367  139293 cli_runner.go:164] Run: docker container inspect ha-828560-m04 --format={{.State.Status}}
	I1115 10:08:04.300087  139293 status.go:371] ha-828560-m04 host status = "Stopped" (err=<nil>)
	I1115 10:08:04.300107  139293 status.go:384] host is not running, skipping remaining checks
	I1115 10:08:04.300114  139293 status.go:176] ha-828560-m04 status: &{Name:ha-828560-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (116.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1115 10:08:54.047275   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-828560 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m55.829860439s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (116.62s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (43.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 node add --control-plane --alsologtostderr -v 5
E1115 10:10:13.401299   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/functional-169872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:10:41.107120   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/functional-169872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-828560 node add --control-plane --alsologtostderr -v 5: (42.260352902s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-828560 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (43.15s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

                                                
                                    
x
+
TestJSONOutput/start/Command (72.33s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-387139 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-387139 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m12.327294224s)
--- PASS: TestJSONOutput/start/Command (72.33s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.85s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-387139 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-387139 --output=json --user=testUser: (5.845437063s)
--- PASS: TestJSONOutput/stop/Command (5.85s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-416439 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-416439 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (75.334326ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c73cc547-9dc8-44c7-b1f7-0f481749c10e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-416439] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"02b7d213-e632-43ef-9c1f-fb9fffdb4ad8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21894"}}
	{"specversion":"1.0","id":"cd0a40fe-0339-4d36-80e2-99e8cf42ba6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2a484377-9e4a-4ee5-9474-6e989aa98fdd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig"}}
	{"specversion":"1.0","id":"629633df-ac53-419b-b32a-36a4f5b24a78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube"}}
	{"specversion":"1.0","id":"376925be-7d1b-4a01-8dbc-c4ee0b0ea3a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"cba1d141-42bd-4b31-8477-d8a49cb127ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"df54ed1b-c87e-46cd-8c7f-41ac22fdcdd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-416439" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-416439
--- PASS: TestErrorJSONOutput (0.23s)
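
With --output=json, minikube emits one CloudEvents-style JSON object per line, and error events such as DRV_UNSUPPORTED_OS carry an exitcode, name, and message in their data payload. A sketch of scanning such a stream; the field names are the ones visible in the events printed above:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"strings"
	)

	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin) // e.g. piped from: minikube start ... --output=json
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if !strings.HasPrefix(line, "{") {
				continue // skip non-JSON log lines
			}
			var ev event
			if err := json.Unmarshal([]byte(line), &ev); err != nil {
				continue
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error %s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}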

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (39.33s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-849415 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-849415 --network=: (37.169914419s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-849415" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-849415
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-849415: (2.14206758s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.33s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (24.82s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-227676 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-227676 --network=bridge: (22.789732917s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-227676" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-227676
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-227676: (2.014210724s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.82s)

                                                
                                    
x
+
TestKicExistingNetwork (25.78s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1115 10:13:23.827926   58962 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1115 10:13:23.844418   58962 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1115 10:13:23.844499   58962 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1115 10:13:23.844523   58962 cli_runner.go:164] Run: docker network inspect existing-network
W1115 10:13:23.860896   58962 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1115 10:13:23.860928   58962 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1115 10:13:23.860961   58962 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1115 10:13:23.861127   58962 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1115 10:13:23.878308   58962 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-77644897380e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:f9:e2:83:c3:91} reservation:<nil>}
I1115 10:13:23.878700   58962 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e89230}
I1115 10:13:23.878724   58962 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1115 10:13:23.878764   58962 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1115 10:13:23.925505   58962 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-531255 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-531255 --network=existing-network: (23.628609658s)
helpers_test.go:175: Cleaning up "existing-network-531255" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-531255
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-531255: (2.023462229s)
I1115 10:13:49.595031   58962 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.78s)
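
In the log above, network_create.go skips 192.168.49.0/24 because it is already taken and settles on the next free private /24, 192.168.58.0/24. A rough sketch of that kind of candidate walk; the step size and candidate range are assumptions inferred from the two subnets shown, not minikube's actual algorithm:

	package main

	import "fmt"

	// freeSubnet walks 192.168.x.0/24 candidates, skipping any that are already
	// in use, roughly like the "skipping subnet ... that is taken" lines above.
	func freeSubnet(taken map[string]bool) string {
		for third := 49; third <= 255; third += 9 { // 49 -> 58 in the log; step is an assumption
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if !taken[cidr] {
				return cidr
			}
		}
		return ""
	}

	func main() {
		taken := map[string]bool{"192.168.49.0/24": true} // the existing kic network from the log
		fmt.Println("using free private subnet:", freeSubnet(taken))
	}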

                                                
                                    
x
+
TestKicCustomSubnet (26.88s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-198931 --subnet=192.168.60.0/24
E1115 10:13:54.046811   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-198931 --subnet=192.168.60.0/24: (24.724803439s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-198931 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-198931" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-198931
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-198931: (2.133988868s)
--- PASS: TestKicCustomSubnet (26.88s)

                                                
                                    
x
+
TestKicStaticIP (28.49s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-757030 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-757030 --static-ip=192.168.200.200: (26.176508553s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-757030 ip
helpers_test.go:175: Cleaning up "static-ip-757030" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-757030
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-757030: (2.164596248s)
--- PASS: TestKicStaticIP (28.49s)
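
--static-ip expects a private IPv4 address, and the follow-up `minikube ip` call above checks that the cluster actually got it. A minimal validity check for the address used in this test, using only the net package:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		ip := net.ParseIP("192.168.200.200") // the value passed to --static-ip above
		switch {
		case ip == nil || ip.To4() == nil:
			fmt.Println("not a valid IPv4 address")
		case !ip.IsPrivate():
			fmt.Println("static IP should be in a private range")
		default:
			fmt.Println("usable static IP:", ip)
		}
	}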

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (52.58s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-830661 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-830661 --driver=docker  --container-runtime=crio: (24.348247791s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-833340 --driver=docker  --container-runtime=crio
E1115 10:15:13.403150   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/functional-169872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-833340 --driver=docker  --container-runtime=crio: (23.06750469s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-830661
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-833340
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-833340" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-833340
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-833340: (1.966389783s)
helpers_test.go:175: Cleaning up "first-830661" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-830661
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-830661: (1.962294447s)
--- PASS: TestMinikubeProfile (52.58s)
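
A minimal sketch of the profile-switching flow exercised above, with hypothetical profile names and the minikube binary on PATH:

	minikube start -p first-demo --driver=docker --container-runtime=crio
	minikube start -p second-demo --driver=docker --container-runtime=crio
	# select the active profile, then list all profiles as JSON
	minikube profile first-demo
	minikube profile list -ojson
	minikube delete -p second-demo
	minikube delete -p first-demo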

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.61s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-389866 --memory=3072 --mount-string /tmp/TestMountStartserial3193221159/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-389866 --memory=3072 --mount-string /tmp/TestMountStartserial3193221159/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.608784456s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.61s)
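
A minimal sketch of the mount-only start used here, mirroring the flags from the invocation above; the profile name and host path are hypothetical:

	# start a node with no Kubernetes, exporting a host directory over 9p
	minikube start -p mount-demo --memory=3072 --no-kubernetes \
	  --mount-string /tmp/mount-demo-src:/minikube-host \
	  --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
	  --driver=docker --container-runtime=crio
	# verify the mount from inside the node (as the VerifyMount steps below do)
	minikube -p mount-demo ssh -- ls /minikube-host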

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-389866 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.8s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-408957 --memory=3072 --mount-string /tmp/TestMountStartserial3193221159/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-408957 --memory=3072 --mount-string /tmp/TestMountStartserial3193221159/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.800991264s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.80s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-408957 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-389866 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-389866 --alsologtostderr -v=5: (1.706102497s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-408957 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-408957
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-408957: (1.286980856s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.01s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-408957
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-408957: (7.01213498s)
--- PASS: TestMountStart/serial/RestartStopped (8.01s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-408957 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (130.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-560414 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-560414 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m9.83019526s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (130.33s)
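
A minimal sketch of a fresh two-node start like the one above (profile name hypothetical):

	minikube start -p multinode-demo --nodes=2 --memory=3072 --wait=true \
	  --driver=docker --container-runtime=crio
	# both the control plane and the worker should report Running
	minikube -p multinode-demo status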

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (7.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-560414 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-560414 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-560414 -- rollout status deployment/busybox: (5.451456875s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-560414 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-560414 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-560414 -- exec busybox-7b57f96db7-8rg59 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-560414 -- exec busybox-7b57f96db7-mn4cq -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-560414 -- exec busybox-7b57f96db7-8rg59 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-560414 -- exec busybox-7b57f96db7-mn4cq -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-560414 -- exec busybox-7b57f96db7-8rg59 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-560414 -- exec busybox-7b57f96db7-mn4cq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.26s)
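
A minimal sketch of the DNS checks above, calling kubectl against the cluster context directly (as later steps do) rather than through `minikube kubectl`; the manifest path is the repo's testdata file and pod names are discovered at run time:

	kubectl --context multinode-demo apply -f testdata/multinodes/multinode-pod-dns-test.yaml
	kubectl --context multinode-demo rollout status deployment/busybox
	# resolve an external and an in-cluster name from every busybox pod
	for pod in $(kubectl --context multinode-demo get pods -o jsonpath='{.items[*].metadata.name}'); do
	  kubectl --context multinode-demo exec "$pod" -- nslookup kubernetes.io
	  kubectl --context multinode-demo exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
	done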

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-560414 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-560414 -- exec busybox-7b57f96db7-8rg59 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-560414 -- exec busybox-7b57f96db7-8rg59 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-560414 -- exec busybox-7b57f96db7-mn4cq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-560414 -- exec busybox-7b57f96db7-mn4cq -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)
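
A minimal sketch of the host-reachability check, with a hypothetical pod name; the awk 'NR==5' line pick matches busybox's nslookup output format as used above:

	# extract the host.minikube.internal address from inside the pod, then ping it
	HOST_IP=$(kubectl --context multinode-demo exec busybox-pod -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	kubectl --context multinode-demo exec busybox-pod -- sh -c "ping -c 1 $HOST_IP"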

                                                
                                    
TestMultiNode/serial/AddNode (24.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-560414 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-560414 -v=5 --alsologtostderr: (23.879568202s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (24.52s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-560414 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 cp testdata/cp-test.txt multinode-560414:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 ssh -n multinode-560414 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 cp multinode-560414:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1144288329/001/cp-test_multinode-560414.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 ssh -n multinode-560414 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 cp multinode-560414:/home/docker/cp-test.txt multinode-560414-m02:/home/docker/cp-test_multinode-560414_multinode-560414-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 ssh -n multinode-560414 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 ssh -n multinode-560414-m02 "sudo cat /home/docker/cp-test_multinode-560414_multinode-560414-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 cp multinode-560414:/home/docker/cp-test.txt multinode-560414-m03:/home/docker/cp-test_multinode-560414_multinode-560414-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 ssh -n multinode-560414 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 ssh -n multinode-560414-m03 "sudo cat /home/docker/cp-test_multinode-560414_multinode-560414-m03.txt"
E1115 10:18:54.047729   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 cp testdata/cp-test.txt multinode-560414-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 ssh -n multinode-560414-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 cp multinode-560414-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1144288329/001/cp-test_multinode-560414-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 ssh -n multinode-560414-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 cp multinode-560414-m02:/home/docker/cp-test.txt multinode-560414:/home/docker/cp-test_multinode-560414-m02_multinode-560414.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 ssh -n multinode-560414-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 ssh -n multinode-560414 "sudo cat /home/docker/cp-test_multinode-560414-m02_multinode-560414.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 cp multinode-560414-m02:/home/docker/cp-test.txt multinode-560414-m03:/home/docker/cp-test_multinode-560414-m02_multinode-560414-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 ssh -n multinode-560414-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 ssh -n multinode-560414-m03 "sudo cat /home/docker/cp-test_multinode-560414-m02_multinode-560414-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 cp testdata/cp-test.txt multinode-560414-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 ssh -n multinode-560414-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 cp multinode-560414-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1144288329/001/cp-test_multinode-560414-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 ssh -n multinode-560414-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 cp multinode-560414-m03:/home/docker/cp-test.txt multinode-560414:/home/docker/cp-test_multinode-560414-m03_multinode-560414.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 ssh -n multinode-560414-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 ssh -n multinode-560414 "sudo cat /home/docker/cp-test_multinode-560414-m03_multinode-560414.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 cp multinode-560414-m03:/home/docker/cp-test.txt multinode-560414-m02:/home/docker/cp-test_multinode-560414-m03_multinode-560414-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 ssh -n multinode-560414-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 ssh -n multinode-560414-m02 "sudo cat /home/docker/cp-test_multinode-560414-m03_multinode-560414-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.67s)
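
A minimal sketch of the three cp directions exercised above: host to node, node to host, and node to node, with verification over ssh (profile name hypothetical; node names follow minikube's <profile>, <profile>-m02 convention):

	# host -> node
	minikube -p multinode-demo cp ./cp-test.txt multinode-demo:/home/docker/cp-test.txt
	# node -> host
	minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt ./cp-test-copy.txt
	# node -> node
	minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt
	# verify on the receiving node
	minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"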

                                                
                                    
TestMultiNode/serial/StopNode (2.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-560414 node stop m03: (1.300092362s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-560414 status: exit status 7 (496.819741ms)

                                                
                                                
-- stdout --
	multinode-560414
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-560414-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-560414-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-560414 status --alsologtostderr: exit status 7 (492.483974ms)

                                                
                                                
-- stdout --
	multinode-560414
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-560414-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-560414-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:19:02.105661  203158 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:19:02.105769  203158 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:19:02.105782  203158 out.go:374] Setting ErrFile to fd 2...
	I1115 10:19:02.105788  203158 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:19:02.106028  203158 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 10:19:02.106202  203158 out.go:368] Setting JSON to false
	I1115 10:19:02.106234  203158 mustload.go:66] Loading cluster: multinode-560414
	I1115 10:19:02.106344  203158 notify.go:221] Checking for updates...
	I1115 10:19:02.106598  203158 config.go:182] Loaded profile config "multinode-560414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:19:02.106611  203158 status.go:174] checking status of multinode-560414 ...
	I1115 10:19:02.107038  203158 cli_runner.go:164] Run: docker container inspect multinode-560414 --format={{.State.Status}}
	I1115 10:19:02.126117  203158 status.go:371] multinode-560414 host status = "Running" (err=<nil>)
	I1115 10:19:02.126145  203158 host.go:66] Checking if "multinode-560414" exists ...
	I1115 10:19:02.126415  203158 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-560414
	I1115 10:19:02.144036  203158 host.go:66] Checking if "multinode-560414" exists ...
	I1115 10:19:02.144301  203158 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:19:02.144342  203158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-560414
	I1115 10:19:02.163454  203158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/multinode-560414/id_rsa Username:docker}
	I1115 10:19:02.254787  203158 ssh_runner.go:195] Run: systemctl --version
	I1115 10:19:02.261143  203158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:19:02.272941  203158 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:19:02.329176  203158 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-15 10:19:02.320104019 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:19:02.329757  203158 kubeconfig.go:125] found "multinode-560414" server: "https://192.168.67.2:8443"
	I1115 10:19:02.329789  203158 api_server.go:166] Checking apiserver status ...
	I1115 10:19:02.329822  203158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:19:02.341980  203158 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1361/cgroup
	I1115 10:19:02.350448  203158 api_server.go:182] apiserver freezer: "4:freezer:/docker/4177f503fe0ac8a9ced42e1848105ec2a3bae99c5ea4afd4a98feff057b44bea/crio/crio-3d5ce1ea7f3afb8940c88f2fdf8fa69d36d58f9bc0efbb494782504817ad3e05"
	I1115 10:19:02.350498  203158 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4177f503fe0ac8a9ced42e1848105ec2a3bae99c5ea4afd4a98feff057b44bea/crio/crio-3d5ce1ea7f3afb8940c88f2fdf8fa69d36d58f9bc0efbb494782504817ad3e05/freezer.state
	I1115 10:19:02.357936  203158 api_server.go:204] freezer state: "THAWED"
	I1115 10:19:02.357976  203158 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1115 10:19:02.362984  203158 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1115 10:19:02.363006  203158 status.go:463] multinode-560414 apiserver status = Running (err=<nil>)
	I1115 10:19:02.363017  203158 status.go:176] multinode-560414 status: &{Name:multinode-560414 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 10:19:02.363039  203158 status.go:174] checking status of multinode-560414-m02 ...
	I1115 10:19:02.363281  203158 cli_runner.go:164] Run: docker container inspect multinode-560414-m02 --format={{.State.Status}}
	I1115 10:19:02.380576  203158 status.go:371] multinode-560414-m02 host status = "Running" (err=<nil>)
	I1115 10:19:02.380598  203158 host.go:66] Checking if "multinode-560414-m02" exists ...
	I1115 10:19:02.380845  203158 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-560414-m02
	I1115 10:19:02.398539  203158 host.go:66] Checking if "multinode-560414-m02" exists ...
	I1115 10:19:02.398804  203158 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:19:02.398842  203158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-560414-m02
	I1115 10:19:02.416531  203158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21894-55448/.minikube/machines/multinode-560414-m02/id_rsa Username:docker}
	I1115 10:19:02.507331  203158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:19:02.519382  203158 status.go:176] multinode-560414-m02 status: &{Name:multinode-560414-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1115 10:19:02.519420  203158 status.go:174] checking status of multinode-560414-m03 ...
	I1115 10:19:02.519763  203158 cli_runner.go:164] Run: docker container inspect multinode-560414-m03 --format={{.State.Status}}
	I1115 10:19:02.536809  203158 status.go:371] multinode-560414-m03 host status = "Stopped" (err=<nil>)
	I1115 10:19:02.536835  203158 status.go:384] host is not running, skipping remaining checks
	I1115 10:19:02.536842  203158 status.go:176] multinode-560414-m03 status: &{Name:multinode-560414-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
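
A minimal sketch of stopping one node and reading the mixed status; as shown above, `status` exits 7 when any host is stopped:

	minikube -p multinode-demo node stop m03
	# non-zero (7) while m03 is stopped; the remaining nodes still show Running
	minikube -p multinode-demo status || echo "status exited $? (expected 7 with a stopped node)"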

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-560414 node start m03 -v=5 --alsologtostderr: (6.315186144s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.01s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (72.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-560414
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-560414
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-560414: (25.049742236s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-560414 --wait=true -v=5 --alsologtostderr
E1115 10:20:13.400626   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/functional-169872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-560414 --wait=true -v=5 --alsologtostderr: (47.730703791s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-560414
--- PASS: TestMultiNode/serial/RestartKeepsNodes (72.91s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-560414 node delete m03: (4.659700503s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.26s)
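
A minimal sketch of removing a node and re-checking readiness with the same go-template used above:

	minikube -p multinode-demo node delete m03
	minikube -p multinode-demo status
	kubectl --context multinode-demo get nodes
	# prints one "True" per remaining Ready node
	kubectl --context multinode-demo get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'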

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-560414 stop: (23.834171754s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-560414 status: exit status 7 (100.633563ms)

                                                
                                                
-- stdout --
	multinode-560414
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-560414-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-560414 status --alsologtostderr: exit status 7 (96.286532ms)

                                                
                                                
-- stdout --
	multinode-560414
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-560414-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:20:51.713154  212661 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:20:51.713643  212661 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:20:51.713655  212661 out.go:374] Setting ErrFile to fd 2...
	I1115 10:20:51.713659  212661 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:20:51.713839  212661 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 10:20:51.714025  212661 out.go:368] Setting JSON to false
	I1115 10:20:51.714052  212661 mustload.go:66] Loading cluster: multinode-560414
	I1115 10:20:51.714181  212661 notify.go:221] Checking for updates...
	I1115 10:20:51.714398  212661 config.go:182] Loaded profile config "multinode-560414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:20:51.714413  212661 status.go:174] checking status of multinode-560414 ...
	I1115 10:20:51.714831  212661 cli_runner.go:164] Run: docker container inspect multinode-560414 --format={{.State.Status}}
	I1115 10:20:51.733966  212661 status.go:371] multinode-560414 host status = "Stopped" (err=<nil>)
	I1115 10:20:51.733991  212661 status.go:384] host is not running, skipping remaining checks
	I1115 10:20:51.733999  212661 status.go:176] multinode-560414 status: &{Name:multinode-560414 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 10:20:51.734018  212661 status.go:174] checking status of multinode-560414-m02 ...
	I1115 10:20:51.734266  212661 cli_runner.go:164] Run: docker container inspect multinode-560414-m02 --format={{.State.Status}}
	I1115 10:20:51.751352  212661 status.go:371] multinode-560414-m02 host status = "Stopped" (err=<nil>)
	I1115 10:20:51.751371  212661 status.go:384] host is not running, skipping remaining checks
	I1115 10:20:51.751377  212661 status.go:176] multinode-560414-m02 status: &{Name:multinode-560414-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.03s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (47.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-560414 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1115 10:21:36.469084   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/functional-169872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-560414 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (46.49416247s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-560414 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (47.09s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (28.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-560414
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-560414-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-560414-m02 --driver=docker  --container-runtime=crio: exit status 14 (75.528146ms)

                                                
                                                
-- stdout --
	* [multinode-560414-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21894
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-560414-m02' is duplicated with machine name 'multinode-560414-m02' in profile 'multinode-560414'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-560414-m03 --driver=docker  --container-runtime=crio
E1115 10:21:57.115095   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-560414-m03 --driver=docker  --container-runtime=crio: (26.312751636s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-560414
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-560414: exit status 80 (290.562009ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-560414 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-560414-m03 already exists in multinode-560414-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-560414-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-560414-m03: (1.974810423s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (28.72s)

                                                
                                    
TestPreload (127.73s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-140808 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-140808 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (52.969855912s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-140808 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-140808 image pull gcr.io/k8s-minikube/busybox: (5.098600672s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-140808
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-140808: (5.833241132s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-140808 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1115 10:23:54.047276   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-140808 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m1.210232721s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-140808 image list
helpers_test.go:175: Cleaning up "test-preload-140808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-140808
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-140808: (2.391228432s)
--- PASS: TestPreload (127.73s)
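
A minimal sketch of the preload-disabled flow above: start without the preload tarball, pull an extra image, restart, and confirm the image survives (profile name hypothetical):

	minikube start -p preload-demo --memory=3072 --preload=false \
	  --driver=docker --container-runtime=crio --kubernetes-version=v1.32.0
	minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
	minikube stop -p preload-demo
	minikube start -p preload-demo --memory=3072 --wait=true --driver=docker --container-runtime=crio
	# the pulled busybox image should still be listed after the restart
	minikube -p preload-demo image list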

                                                
                                    
TestScheduledStopUnix (99.7s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-059743 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-059743 --memory=3072 --driver=docker  --container-runtime=crio: (23.877282046s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-059743 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1115 10:24:43.416790  230304 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:24:43.417073  230304 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:24:43.417083  230304 out.go:374] Setting ErrFile to fd 2...
	I1115 10:24:43.417087  230304 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:24:43.417278  230304 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 10:24:43.417511  230304 out.go:368] Setting JSON to false
	I1115 10:24:43.417605  230304 mustload.go:66] Loading cluster: scheduled-stop-059743
	I1115 10:24:43.417924  230304 config.go:182] Loaded profile config "scheduled-stop-059743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:24:43.418029  230304 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/scheduled-stop-059743/config.json ...
	I1115 10:24:43.418215  230304 mustload.go:66] Loading cluster: scheduled-stop-059743
	I1115 10:24:43.418314  230304 config.go:182] Loaded profile config "scheduled-stop-059743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-059743 -n scheduled-stop-059743
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-059743 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1115 10:24:43.799086  230455 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:24:43.799215  230455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:24:43.799225  230455 out.go:374] Setting ErrFile to fd 2...
	I1115 10:24:43.799229  230455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:24:43.799458  230455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 10:24:43.799681  230455 out.go:368] Setting JSON to false
	I1115 10:24:43.799876  230455 daemonize_unix.go:73] killing process 230339 as it is an old scheduled stop
	I1115 10:24:43.800003  230455 mustload.go:66] Loading cluster: scheduled-stop-059743
	I1115 10:24:43.800357  230455 config.go:182] Loaded profile config "scheduled-stop-059743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:24:43.800433  230455 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/scheduled-stop-059743/config.json ...
	I1115 10:24:43.800604  230455 mustload.go:66] Loading cluster: scheduled-stop-059743
	I1115 10:24:43.800713  230455 config.go:182] Loaded profile config "scheduled-stop-059743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1115 10:24:43.805286   58962 retry.go:31] will retry after 50.906µs: open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/scheduled-stop-059743/pid: no such file or directory
I1115 10:24:43.806485   58962 retry.go:31] will retry after 165.242µs: open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/scheduled-stop-059743/pid: no such file or directory
I1115 10:24:43.807633   58962 retry.go:31] will retry after 209.989µs: open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/scheduled-stop-059743/pid: no such file or directory
I1115 10:24:43.808796   58962 retry.go:31] will retry after 400.29µs: open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/scheduled-stop-059743/pid: no such file or directory
I1115 10:24:43.809927   58962 retry.go:31] will retry after 386.966µs: open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/scheduled-stop-059743/pid: no such file or directory
I1115 10:24:43.811026   58962 retry.go:31] will retry after 870.56µs: open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/scheduled-stop-059743/pid: no such file or directory
I1115 10:24:43.812155   58962 retry.go:31] will retry after 763.981µs: open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/scheduled-stop-059743/pid: no such file or directory
I1115 10:24:43.813296   58962 retry.go:31] will retry after 2.520638ms: open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/scheduled-stop-059743/pid: no such file or directory
I1115 10:24:43.816489   58962 retry.go:31] will retry after 3.075069ms: open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/scheduled-stop-059743/pid: no such file or directory
I1115 10:24:43.819650   58962 retry.go:31] will retry after 2.533348ms: open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/scheduled-stop-059743/pid: no such file or directory
I1115 10:24:43.822811   58962 retry.go:31] will retry after 5.895796ms: open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/scheduled-stop-059743/pid: no such file or directory
I1115 10:24:43.829023   58962 retry.go:31] will retry after 8.316108ms: open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/scheduled-stop-059743/pid: no such file or directory
I1115 10:24:43.838254   58962 retry.go:31] will retry after 8.732021ms: open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/scheduled-stop-059743/pid: no such file or directory
I1115 10:24:43.847724   58962 retry.go:31] will retry after 13.991525ms: open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/scheduled-stop-059743/pid: no such file or directory
I1115 10:24:43.862011   58962 retry.go:31] will retry after 27.461914ms: open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/scheduled-stop-059743/pid: no such file or directory
I1115 10:24:43.890266   58962 retry.go:31] will retry after 41.242548ms: open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/scheduled-stop-059743/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-059743 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-059743 -n scheduled-stop-059743
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-059743
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-059743 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1115 10:25:09.699337  231095 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:25:09.699618  231095 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:25:09.699630  231095 out.go:374] Setting ErrFile to fd 2...
	I1115 10:25:09.699635  231095 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:25:09.699820  231095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 10:25:09.700071  231095 out.go:368] Setting JSON to false
	I1115 10:25:09.700154  231095 mustload.go:66] Loading cluster: scheduled-stop-059743
	I1115 10:25:09.700484  231095 config.go:182] Loaded profile config "scheduled-stop-059743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:25:09.700549  231095 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/scheduled-stop-059743/config.json ...
	I1115 10:25:09.700728  231095 mustload.go:66] Loading cluster: scheduled-stop-059743
	I1115 10:25:09.700822  231095 config.go:182] Loaded profile config "scheduled-stop-059743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
E1115 10:25:13.403395   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/functional-169872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-059743
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-059743: exit status 7 (81.24604ms)

                                                
                                                
-- stdout --
	scheduled-stop-059743
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-059743 -n scheduled-stop-059743
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-059743 -n scheduled-stop-059743: exit status 7 (78.210942ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-059743" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-059743
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-059743: (4.310896823s)
--- PASS: TestScheduledStopUnix (99.70s)
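
A minimal sketch of the scheduled-stop flow above (profile name hypothetical): schedule a stop, cancel it, then let a short schedule fire:

	minikube start -p sched-demo --memory=3072 --driver=docker --container-runtime=crio
	minikube stop -p sched-demo --schedule 5m
	minikube stop -p sched-demo --cancel-scheduled
	# host should still be Running after the cancel
	minikube status --format='{{.Host}}' -p sched-demo
	minikube stop -p sched-demo --schedule 15s
	sleep 30
	# now reports Stopped and exits 7
	minikube status --format='{{.Host}}' -p sched-demo || true
	minikube delete -p sched-demo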

                                                
                                    
TestInsufficientStorage (10.2s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-262515 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-262515 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.698728875s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"473ce007-dbed-4348-9ff3-08d11f363945","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-262515] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0f1b1e8c-5925-4439-a8cc-c819ab415c5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21894"}}
	{"specversion":"1.0","id":"806fd469-8bb0-4c39-8a06-3ea5eb2e2ec2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1e595dfb-a52c-46d1-88a0-87b6a781a709","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig"}}
	{"specversion":"1.0","id":"da943886-01c6-4660-bb81-0a23b07b7fd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube"}}
	{"specversion":"1.0","id":"103d4800-91e6-4b21-be9b-58d479f1fe8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"5fd602fd-f266-4f2b-b1b5-e6fcbe67b061","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ca84f7f4-9480-4ce6-8a4d-75b86646f310","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"ec5449cb-e65b-47a9-8565-76ae58b85c50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"418a6ffe-537a-4ab4-b27d-33f36d84c1e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d4bbfe91-c3e2-4dcc-8da8-7640819bfb4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"598b4939-1a85-4515-af99-8691b03ef711","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-262515\" primary control-plane node in \"insufficient-storage-262515\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"56a8afe9-02b9-40fa-86e0-47c0f3c99232","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1761985721-21837 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"1b60dd04-7828-4b30-b8ef-a9acf81f6c53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"07fa0c99-b651-4eed-99b5-b08a78642995","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-262515 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-262515 --output=json --layout=cluster: exit status 7 (283.769655ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-262515","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-262515","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1115 10:26:07.144722  233594 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-262515" does not appear in /home/jenkins/minikube-integration/21894-55448/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-262515 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-262515 --output=json --layout=cluster: exit status 7 (285.260681ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-262515","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-262515","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1115 10:26:07.430716  233703 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-262515" does not appear in /home/jenkins/minikube-integration/21894-55448/kubeconfig
	E1115 10:26:07.441126  233703 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/insufficient-storage-262515/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-262515" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-262515
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-262515: (1.934446674s)
--- PASS: TestInsufficientStorage (10.20s)
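The exit-26 path above is driven by the two MINIKUBE_TEST_* variables shown in the JSON output, which appear to make minikube treat /var as nearly full. A minimal sketch of the same sequence, ending with the cleanup step suggested in the emitted advice (the profile name here is illustrative):

    # Simulate a 100GB disk with only 19GB free, as in this run's environment.
    export MINIKUBE_TEST_STORAGE_CAPACITY=100
    export MINIKUBE_TEST_AVAILABLE_STORAGE=19

    # Start aborts with exit status 26 (RSRC_DOCKER_STORAGE).
    minikube start -p insufficient-storage-demo --memory=3072 --output=json \
      --wait=true --driver=docker --container-runtime=crio

    # Cluster-level status then reports InsufficientStorage (exit status 7).
    minikube status -p insufficient-storage-demo --output=json --layout=cluster

    # On a genuinely full host, reclaim space per the emitted advice.
    docker system prune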

                                                
                                    
x
+
TestRunningBinaryUpgrade (44.93s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.4083794028 start -p running-upgrade-188012 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.4083794028 start -p running-upgrade-188012 --memory=3072 --vm-driver=docker  --container-runtime=crio: (25.783914091s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-188012 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-188012 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (13.544987975s)
helpers_test.go:175: Cleaning up "running-upgrade-188012" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-188012
E1115 10:30:13.401080   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/functional-169872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-188012: (2.068086002s)
--- PASS: TestRunningBinaryUpgrade (44.93s)
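The flow above amounts to starting a cluster with an old release binary and then pointing the current binary at the same, still-running profile, which it adopts rather than recreates. A minimal sketch, collecting the run's own commands (the /tmp binary path is the temp file used by this run):

    # Bring up a cluster with the legacy v1.32.0 binary.
    /tmp/minikube-v1.32.0.4083794028 start -p running-upgrade-188012 --memory=3072 \
      --vm-driver=docker --container-runtime=crio

    # Re-run start with the current binary against the running profile.
    out/minikube-linux-amd64 start -p running-upgrade-188012 --memory=3072 \
      --driver=docker --container-runtime=crio

    out/minikube-linux-amd64 delete -p running-upgrade-188012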

                                                
                                    
x
+
TestKubernetesUpgrade (335.22s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-914881 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-914881 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.490831747s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-914881
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-914881: (4.258440333s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-914881 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-914881 status --format={{.Host}}: exit status 7 (96.74983ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-914881 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-914881 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m32.279678548s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-914881 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-914881 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-914881 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (79.608253ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-914881] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21894
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-914881
	    minikube start -p kubernetes-upgrade-914881 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9148812 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-914881 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-914881 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-914881 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.790177525s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-914881" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-914881
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-914881: (2.154794041s)
--- PASS: TestKubernetesUpgrade (335.22s)
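The exit-106 block above is the guard against in-place downgrades. A minimal sketch of the upgrade flow and of the only supported downgrade path (delete and recreate), using the same profile name and versions as the run:

    # Upgrade: start at the old version, stop, then start at the new version.
    minikube start -p kubernetes-upgrade-914881 --memory=3072 \
      --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    minikube stop -p kubernetes-upgrade-914881
    minikube start -p kubernetes-upgrade-914881 --memory=3072 \
      --kubernetes-version=v1.34.1 --driver=docker --container-runtime=crio

    # A downgrade attempt on the existing cluster fails with exit status 106
    # (K8S_DOWNGRADE_UNSUPPORTED); the supported route is delete + recreate.
    minikube delete -p kubernetes-upgrade-914881
    minikube start -p kubernetes-upgrade-914881 --kubernetes-version=v1.28.0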

                                                
                                    
x
+
TestMissingContainerUpgrade (125.69s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1151486953 start -p missing-upgrade-229925 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1151486953 start -p missing-upgrade-229925 --memory=3072 --driver=docker  --container-runtime=crio: (1m18.694019181s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-229925
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-229925
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-229925 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-229925 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.444069104s)
helpers_test.go:175: Cleaning up "missing-upgrade-229925" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-229925
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-229925: (2.022155789s)
--- PASS: TestMissingContainerUpgrade (125.69s)
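This scenario removes the underlying docker container between binary versions, so the new binary has to rebuild it from the surviving profile data. A minimal sketch, with binary path and profile taken from the run above:

    # Create a cluster with the legacy binary, then delete its container
    # out from under minikube.
    /tmp/minikube-v1.32.0.1151486953 start -p missing-upgrade-229925 --memory=3072 \
      --driver=docker --container-runtime=crio
    docker stop missing-upgrade-229925
    docker rm missing-upgrade-229925

    # The current binary detects the missing container and recreates it.
    out/minikube-linux-amd64 start -p missing-upgrade-229925 --memory=3072 \
      --driver=docker --container-runtime=crio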

                                                
                                    
x
+
TestPause/serial/Start (80.06s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-642487 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-642487 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m20.059051137s)
--- PASS: TestPause/serial/Start (80.06s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-855068 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-855068 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (102.591187ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-855068] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21894
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
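The MK_USAGE failure here is intentional: --no-kubernetes and --kubernetes-version are mutually exclusive. A minimal sketch of the two ways out that the error text itself suggests:

    # Either drop the version flag entirely...
    minikube start -p NoKubernetes-855068 --no-kubernetes --driver=docker \
      --container-runtime=crio

    # ...or clear a globally configured version before retrying.
    minikube config unset kubernetes-version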

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (38.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-855068 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-855068 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.987623694s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-855068 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.34s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (5.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-855068 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-855068 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (3.094880274s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-855068 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-855068 status -o json: exit status 2 (365.978018ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-855068","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-855068
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-855068: (2.08992628s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (5.55s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-931243 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-931243 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (187.487022ms)

                                                
                                                
-- stdout --
	* [false-931243] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21894
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:26:48.803387  245856 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:26:48.803721  245856 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:26:48.803737  245856 out.go:374] Setting ErrFile to fd 2...
	I1115 10:26:48.803745  245856 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:26:48.804001  245856 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-55448/.minikube/bin
	I1115 10:26:48.804491  245856 out.go:368] Setting JSON to false
	I1115 10:26:48.805689  245856 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7746,"bootTime":1763194663,"procs":277,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:26:48.805792  245856 start.go:143] virtualization: kvm guest
	I1115 10:26:48.807686  245856 out.go:179] * [false-931243] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:26:48.809905  245856 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 10:26:48.809933  245856 notify.go:221] Checking for updates...
	I1115 10:26:48.812504  245856 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:26:48.814092  245856 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-55448/kubeconfig
	I1115 10:26:48.815319  245856 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-55448/.minikube
	I1115 10:26:48.816551  245856 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:26:48.817800  245856 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:26:48.819489  245856 config.go:182] Loaded profile config "NoKubernetes-855068": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1115 10:26:48.819612  245856 config.go:182] Loaded profile config "offline-crio-637291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:26:48.819717  245856 config.go:182] Loaded profile config "pause-642487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:26:48.819835  245856 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:26:48.846425  245856 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:26:48.846507  245856 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:26:48.915248  245856 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:76 SystemTime:2025-11-15 10:26:48.902115747 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:26:48.915359  245856 docker.go:319] overlay module found
	I1115 10:26:48.917796  245856 out.go:179] * Using the docker driver based on user configuration
	I1115 10:26:48.919861  245856 start.go:309] selected driver: docker
	I1115 10:26:48.919882  245856 start.go:930] validating driver "docker" against <nil>
	I1115 10:26:48.919897  245856 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:26:48.922387  245856 out.go:203] 
	W1115 10:26:48.925885  245856 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1115 10:26:48.927095  245856 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-931243 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-931243

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-931243

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-931243

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-931243

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-931243

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-931243

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-931243

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-931243

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-931243

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-931243

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-931243

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-931243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-931243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-931243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-931243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-931243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-931243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-931243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-931243" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-931243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-931243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-931243" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 10:26:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: NoKubernetes-855068
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 10:26:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: offline-crio-637291
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 10:26:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-642487
contexts:
- context:
    cluster: NoKubernetes-855068
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 10:26:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-855068
  name: NoKubernetes-855068
- context:
    cluster: offline-crio-637291
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 10:26:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: offline-crio-637291
  name: offline-crio-637291
- context:
    cluster: pause-642487
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 10:26:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-642487
  name: pause-642487
current-context: offline-crio-637291
kind: Config
users:
- name: NoKubernetes-855068
  user:
    client-certificate: /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/NoKubernetes-855068/client.crt
    client-key: /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/NoKubernetes-855068/client.key
- name: offline-crio-637291
  user:
    client-certificate: /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/offline-crio-637291/client.crt
    client-key: /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/offline-crio-637291/client.key
- name: pause-642487
  user:
    client-certificate: /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487/client.crt
    client-key: /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487/client.key


                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-931243

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931243"

                                                
                                                
----------------------- debugLogs end: false-931243 [took: 3.695016525s] --------------------------------
helpers_test.go:175: Cleaning up "false-931243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-931243
--- PASS: TestNetworkPlugins/group/false (4.04s)
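This group passes precisely because --cni=false is rejected: the crio runtime requires a CNI, so minikube exits with status 14 (MK_USAGE) before creating anything. A minimal sketch contrasting the rejected invocation with ones crio accepts (profile names here are illustrative; the auto and kindnet variants mirror the tests that follow):

    # Rejected: crio with CNI explicitly disabled (exit status 14).
    minikube start -p false-demo --cni=false --driver=docker --container-runtime=crio

    # Accepted: let minikube pick a CNI automatically...
    minikube start -p auto-demo --driver=docker --container-runtime=crio
    # ...or choose one explicitly, e.g. kindnet.
    minikube start -p kindnet-demo --cni=kindnet --driver=docker --container-runtime=crio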

                                                
                                    
x
+
TestNoKubernetes/serial/Start (7.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-855068 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-855068 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.515332688s)
--- PASS: TestNoKubernetes/serial/Start (7.52s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (3.83s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.83s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (113.16s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2533635044 start -p stopped-upgrade-567029 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2533635044 start -p stopped-upgrade-567029 --memory=3072 --vm-driver=docker  --container-runtime=crio: (1m37.453843382s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2533635044 -p stopped-upgrade-567029 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2533635044 -p stopped-upgrade-567029 stop: (1.253689928s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-567029 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-567029 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (14.450394008s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (113.16s)
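Unlike the running-binary case earlier, this flow stops the cluster with the old binary first, so the new binary performs a cold restart of the stopped profile. A minimal sketch, with binary path and profile from the run (the final logs call mirrors the MinikubeLogs subtest that follows):

    /tmp/minikube-v1.32.0.2533635044 start -p stopped-upgrade-567029 --memory=3072 \
      --vm-driver=docker --container-runtime=crio
    /tmp/minikube-v1.32.0.2533635044 -p stopped-upgrade-567029 stop

    # The new binary restarts and upgrades the stopped profile in place.
    out/minikube-linux-amd64 start -p stopped-upgrade-567029 --memory=3072 \
      --driver=docker --container-runtime=crio

    # Inspect the upgraded cluster's logs afterwards.
    out/minikube-linux-amd64 logs -p stopped-upgrade-567029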

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21894-55448/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-855068 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-855068 "sudo systemctl is-active --quiet service kubelet": exit status 1 (280.204432ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
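The check above leans on systemctl's exit-code contract: is-active returns 3 for an inactive unit, which minikube ssh surfaces as exit status 1. A minimal sketch of the same verification, where a non-zero exit is the expected result for a --no-kubernetes profile:

    # kubelet should not be active inside the node of a --no-kubernetes profile.
    if minikube ssh -p NoKubernetes-855068 \
        "sudo systemctl is-active --quiet service kubelet"; then
      echo "kubelet is running (unexpected)"
    else
      echo "kubelet is not running (expected)"
    fi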

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.42s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-855068
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-855068: (1.303201536s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (8.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-855068 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-855068 --driver=docker  --container-runtime=crio: (8.871951193s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.87s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-855068 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-855068 "sudo systemctl is-active --quiet service kubelet": exit status 1 (298.423161ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (27.36s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-642487 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-642487 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.337380304s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (27.36s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-567029
E1115 10:28:54.047236   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (74.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-931243 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-931243 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m14.956255443s)
--- PASS: TestNetworkPlugins/group/auto/Start (74.96s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (71.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-931243 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-931243 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m11.815171243s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.82s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-931243 "pgrep -a kubelet"
I1115 10:31:03.995764   58962 config.go:182] Loaded profile config "auto-931243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-931243 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-k4k55" [e94a9968-040e-4b4d-a7ac-3f6383d876cf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-k4k55" [e94a9968-040e-4b4d-a7ac-3f6383d876cf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003852582s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-931243 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-931243 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-931243 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.09s)
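
The three short checks above form the connectivity suite that every network-plugin group repeats against its netcat deployment: DNS resolution of kubernetes.default from inside the pod, a loopback connection to port 8080, and a hairpin connection back to the pod through its own "netcat" service. The commands are runnable as-is from the log; only the CTX variable below is an illustrative addition:

	CTX=auto-931243   # swap in any of the *-931243 profiles started above
	# DNS: resolve the cluster's built-in kubernetes service
	kubectl --context "$CTX" exec deployment/netcat -- nslookup kubernetes.default
	# Localhost: connect to port 8080 on the pod's own loopback
	kubectl --context "$CTX" exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# HairPin: connect back to the pod via its service name
	kubectl --context "$CTX" exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"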

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-fvdbb" [c4d90c42-0221-4530-83f5-1c258c20b0d6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003288187s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
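
The ControllerPod step polls for up to 10 minutes until the CNI's DaemonSet pod reports healthy. Outside the Go helper, roughly the same wait can be expressed with kubectl; this is a sketch, not the command the test runs:

	# Approximate stand-in for the helper's 10m poll on the app=kindnet selector.
	kubectl --context kindnet-931243 -n kube-system wait pod -l app=kindnet \
	  --for=condition=Ready --timeout=10m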

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-931243 "pgrep -a kubelet"
I1115 10:31:32.214108   58962 config.go:182] Loaded profile config "kindnet-931243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-931243 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-z8lxf" [f2646952-203b-4e88-bb12-78ad11c741a6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-z8lxf" [f2646952-203b-4e88-bb12-78ad11c741a6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003514044s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (53.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-931243 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-931243 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (53.435642814s)
--- PASS: TestNetworkPlugins/group/calico/Start (53.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-931243 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-931243 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-931243 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (60.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-931243 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-931243 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m0.284977076s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (60.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (39.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-931243 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-931243 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (39.22607666s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (39.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-qgl8k" [81f0b85a-2507-481b-b7f6-49e5c6d9328b] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-qgl8k" [81f0b85a-2507-481b-b7f6-49e5c6d9328b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00455081s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-931243 "pgrep -a kubelet"
I1115 10:32:32.027642   58962 config.go:182] Loaded profile config "calico-931243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-931243 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2kwqc" [51099a55-1d0d-4c3d-b5f4-e87baf453169] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2kwqc" [51099a55-1d0d-4c3d-b5f4-e87baf453169] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.00386759s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-931243 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-931243 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-931243 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-931243 "pgrep -a kubelet"
I1115 10:32:44.988194   58962 config.go:182] Loaded profile config "enable-default-cni-931243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-931243 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gspdt" [8a3c8b14-55f3-4600-b950-936c01ce8508] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gspdt" [8a3c8b14-55f3-4600-b950-936c01ce8508] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003643776s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-931243 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-931243 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-931243 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-931243 "pgrep -a kubelet"
I1115 10:33:00.187427   58962 config.go:182] Loaded profile config "custom-flannel-931243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-931243 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jfkj9" [6c33b5df-f8dc-4039-9cc1-090645842b1f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jfkj9" [6c33b5df-f8dc-4039-9cc1-090645842b1f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003619755s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (54.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-931243 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-931243 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (54.045270644s)
--- PASS: TestNetworkPlugins/group/flannel/Start (54.05s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-931243 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-931243 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-931243 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (67.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-931243 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-931243 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m7.32825022s)
--- PASS: TestNetworkPlugins/group/bridge/Start (67.33s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (57.90s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-087235 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-087235 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (57.900934378s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (57.90s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (58.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-283677 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1115 10:33:54.047151   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/addons-209049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-283677 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (58.138513325s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (58.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-h94fl" [cd715138-2824-46a6-a281-e481310356cd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003957876s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-931243 "pgrep -a kubelet"
I1115 10:34:03.207383   58962 config.go:182] Loaded profile config "flannel-931243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-931243 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2b9vk" [18ed4e51-d4e7-41b6-8519-6c73a4c7278d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2b9vk" [18ed4e51-d4e7-41b6-8519-6c73a4c7278d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004199375s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-931243 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-931243 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-931243 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (11.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-087235 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [99afc046-339f-4b7b-a19f-e6b0a2bbf831] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [99afc046-339f-4b7b-a19f-e6b0a2bbf831] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004332451s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-087235 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.26s)
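
DeployApp creates the busybox pod from testdata/busybox.yaml, waits up to 8 minutes for the integration-test=busybox selector to become healthy, and then reads the open-file limit inside the container. A rough kubectl-only equivalent of that sequence (a sketch; the Go helper tolerates intermediate Pending states rather than calling kubectl wait):

	CTX=old-k8s-version-087235
	kubectl --context "$CTX" create -f testdata/busybox.yaml
	# Wait for the pod to become Ready, mirroring the helper's 8m timeout.
	kubectl --context "$CTX" wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m
	kubectl --context "$CTX" exec busybox -- /bin/sh -c "ulimit -n"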

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-931243 "pgrep -a kubelet"
I1115 10:34:20.935021   58962 config.go:182] Loaded profile config "bridge-931243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-931243 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gfpsr" [bf539d58-c7da-4161-8ded-4b9a201f8698] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gfpsr" [bf539d58-c7da-4161-8ded-4b9a201f8698] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004765286s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-087235 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-087235 --alsologtostderr -v=3: (12.214206984s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-931243 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-283677 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ddb2a962-6824-4e90-abdf-1404de5921dc] Pending
helpers_test.go:352: "busybox" [ddb2a962-6824-4e90-abdf-1404de5921dc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ddb2a962-6824-4e90-abdf-1404de5921dc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.00348194s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-283677 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-931243 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.10s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-931243 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)
E1115 10:36:14.443472   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/auto-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (49.84s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-719574 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-719574 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (49.837254767s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (49.84s)
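
The --embed-certs flag asks minikube to inline the client certificate and key into the generated kubeconfig instead of referencing files on disk. A quick, hedged spot-check (not part of the test) is to look for embedded credential fields in the raw kubeconfig:

	# Embedded credentials show up as client-certificate-data / client-key-data entries.
	kubectl config view --raw -o yaml | grep -E 'client-(certificate|key)-data' | head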

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-087235 -n old-k8s-version-087235
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-087235 -n old-k8s-version-087235: exit status 7 (93.171282ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-087235 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)
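
EnableAddonAfterStop first reads the host state through a Go template on minikube status (exit status 7 is tolerated because it simply reports a stopped host) and then enables the dashboard addon against the stopped profile. Both commands come straight from the log and can be rerun as-is:

	# Prints "Stopped" and exits 7 for a stopped profile, which the test accepts.
	out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-087235 -n old-k8s-version-087235
	out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-087235 \
	  --images=MetricsScraper=registry.k8s.io/echoserver:1.4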

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (49.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-087235 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-087235 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (49.524371s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-087235 -n old-k8s-version-087235
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (49.86s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.77s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-283677 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-283677 --alsologtostderr -v=3: (12.766994394s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.77s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-026691 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-026691 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m12.011439789s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.01s)
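
The --apiserver-port=8444 flag moves the API server off the default 8443. The server URL the generated kubeconfig points at can be read back with a JSONPath query; this sketch assumes minikube registered the cluster under the profile name, which is its usual behaviour:

	# Expected to print an https://... address ending in :8444 (not run by the test itself).
	kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-026691")].cluster.server}'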

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-283677 -n no-preload-283677
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-283677 -n no-preload-283677: exit status 7 (100.744777ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-283677 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (55.92s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-283677 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1115 10:35:13.400703   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/functional-169872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-283677 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (55.563411128s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-283677 -n no-preload-283677
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (55.92s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (11.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-719574 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [dc1f55ba-efa8-4823-b9a0-0c2cd11a020d] Pending
helpers_test.go:352: "busybox" [dc1f55ba-efa8-4823-b9a0-0c2cd11a020d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [dc1f55ba-efa8-4823-b9a0-0c2cd11a020d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.00357309s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-719574 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.22s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-sh86n" [cdb69a62-a600-4d3b-aaec-535c3b64028f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0031596s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-719574 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-719574 --alsologtostderr -v=3: (12.185608519s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-sh86n" [cdb69a62-a600-4d3b-aaec-535c3b64028f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003245184s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-087235 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)
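
UserAppExistsAfterStop and AddonExistsAfterStop together confirm that the dashboard addon survived the stop/start cycle: the first waits for the kubernetes-dashboard pod, the second additionally describes the dashboard-metrics-scraper deployment. A quick manual spot-check using the same selectors and names seen in the log:

	kubectl --context old-k8s-version-087235 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context old-k8s-version-087235 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper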

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-087235 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
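
VerifyKubernetesImages lists every image in the node's container store as JSON and reports anything that is not a stock minikube/Kubernetes image, which is why the kindnet and busybox images pulled by earlier steps are called out above. The raw listing can be inspected directly; piping through jq is an optional convenience and is assumed to be installed:

	out/minikube-linux-amd64 -p old-k8s-version-087235 image list --format=json | jq .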

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-719574 -n embed-certs-719574
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-719574 -n embed-certs-719574: exit status 7 (100.631562ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-719574 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (49.61s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-719574 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-719574 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (49.1641862s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-719574 -n embed-certs-719574
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.61s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (33.50s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-086099 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-086099 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (33.498125818s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (33.50s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2q95v" [7bc6549a-cd98-4cc6-a665-0ae12bc46067] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004226566s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2q95v" [7bc6549a-cd98-4cc6-a665-0ae12bc46067] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004506148s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-283677 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-283677 image list --format=json
E1115 10:36:04.189604   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/auto-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:36:04.195986   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/auto-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:36:04.207405   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/auto-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:36:04.228810   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/auto-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:36:04.270160   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/auto-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-026691 create -f testdata/busybox.yaml
E1115 10:36:04.352001   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/auto-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8e2e3c26-b883-4c84-b07b-e107e5b36bbc] Pending
E1115 10:36:04.513639   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/auto-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:36:04.835774   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/auto-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:36:05.478053   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/auto-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [8e2e3c26-b883-4c84-b07b-e107e5b36bbc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8e2e3c26-b883-4c84-b07b-e107e5b36bbc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 12.003716067s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-026691 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-026691 --alsologtostderr -v=3
E1115 10:36:24.685518   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/auto-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-026691 --alsologtostderr -v=3: (12.057217494s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.06s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (1.34s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-086099 --alsologtostderr -v=3
E1115 10:36:28.454542   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kindnet-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-086099 --alsologtostderr -v=3: (1.335938585s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.34s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-086099 -n newest-cni-086099
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-086099 -n newest-cni-086099: exit status 7 (81.685702ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-086099 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)
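Note: the non-zero status above is expected. With the node stopped, the status query prints "Stopped" and exits with code 7, which the test treats as acceptable before enabling the addon. A hedged shell sketch of the same tolerance (both commands are taken from the log; the explicit exit-code check is illustrative only):

# tolerate exit status 7 ("Stopped") from the status query, then enable the addon
out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-086099 -n newest-cni-086099
if [ "$?" -eq 7 ]; then
  echo "host reports Stopped (exit 7); continuing"
fi
out/minikube-linux-amd64 addons enable dashboard -p newest-cni-086099 \
  --images=MetricsScraper=registry.k8s.io/echoserver:1.4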

TestStartStop/group/newest-cni/serial/SecondStart (13.06s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-086099 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-086099 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (12.686627273s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-086099 -n newest-cni-086099
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.06s)
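Note: one way to spot-check that the --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 setting survived the restart is to read the node's allocated pod CIDR back out of the API. The query below is an illustrative sketch only; it assumes node CIDR allocation is enabled, as it is when kubeadm is given a pod-network-cidr.

# expect a subnet carved out of 10.42.0.0/16, e.g. 10.42.0.0/24
kubectl --context newest-cni-086099 get nodes -o jsonpath='{.items[0].spec.podCIDR}'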

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-026691 -n default-k8s-diff-port-026691
E1115 10:36:31.016209   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kindnet-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-026691 -n default-k8s-diff-port-026691: exit status 7 (80.755021ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-026691 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (53.35s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-026691 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1115 10:36:36.138228   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/kindnet-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-026691 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (53.030879037s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-026691 -n default-k8s-diff-port-026691
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (53.35s)
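Note: because this profile restarts with --apiserver-port=8444, a quick way to confirm the non-default port took effect is to read the server URL back out of the kubeconfig. The jsonpath query below is illustrative only; the context name and port come from the log.

# the reported server URL should end in :8444
kubectl config view --minify --context default-k8s-diff-port-026691 \
  -o jsonpath='{.clusters[0].cluster.server}'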

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tj9l5" [b09cc532-134b-4839-a993-65f3967000b8] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004093308s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-086099 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tj9l5" [b09cc532-134b-4839-a993-65f3967000b8] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003622157s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-719574 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-719574 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-lnfbf" [230beb1a-4842-4cb2-b64f-07d59686ef2c] Running
E1115 10:37:25.722595   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/calico-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:37:25.728977   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/calico-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:37:25.740385   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/calico-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:37:25.761796   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/calico-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:37:25.803286   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/calico-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:37:25.884897   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/calico-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:37:26.046486   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/calico-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:37:26.131210   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/auto-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:37:26.367914   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/calico-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:37:27.009742   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/calico-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:37:28.291334   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/calico-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003384765s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)
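Note: the readiness check above polls for pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace. A rough manual equivalent follows; kubectl wait is an assumption standing in for the helper, while the namespace, label and 9m budget are from the log.

# manual stand-in for the dashboard readiness wait logged above
kubectl --context default-k8s-diff-port-026691 -n kubernetes-dashboard \
  wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m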

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-lnfbf" [230beb1a-4842-4cb2-b64f-07d59686ef2c] Running
E1115 10:37:30.853257   58962 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/calico-931243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002733442s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-026691 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-026691 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

Test skip (27/328)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.9s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-931243 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-931243

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-931243

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-931243

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-931243

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-931243

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-931243

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-931243

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-931243

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-931243

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-931243

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-931243

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-931243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-931243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-931243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-931243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-931243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-931243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-931243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-931243" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-931243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-931243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-931243" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 10:26:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: NoKubernetes-855068
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 10:26:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-642487
contexts:
- context:
    cluster: NoKubernetes-855068
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 10:26:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-855068
  name: NoKubernetes-855068
- context:
    cluster: pause-642487
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 10:26:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-642487
  name: pause-642487
current-context: NoKubernetes-855068
kind: Config
users:
- name: NoKubernetes-855068
  user:
    client-certificate: /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/NoKubernetes-855068/client.crt
    client-key: /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/NoKubernetes-855068/client.key
- name: pause-642487
  user:
    client-certificate: /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487/client.crt
    client-key: /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487/client.key
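Note: the dump above shows two surviving profiles (NoKubernetes-855068 and pause-642487), with NoKubernetes-855068 as the current context and no kubenet-931243 entry, which is why every query in this section fails. Inspecting or switching contexts against that kubeconfig is a plain kubectl operation (illustrative only; assumes KUBECONFIG points at the file dumped above):

# list the contexts from the dump and switch to the pause profile
kubectl config get-contexts
kubectl config use-context pause-642487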

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-931243

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931243"

                                                
                                                
----------------------- debugLogs end: kubenet-931243 [took: 3.720995551s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-931243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-931243
--- SKIP: TestNetworkPlugins/group/kubenet (3.90s)

TestNetworkPlugins/group/cilium (3.95s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-931243 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-931243

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-931243

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-931243

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-931243

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-931243

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-931243

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-931243

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-931243

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-931243

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-931243

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-931243

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-931243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-931243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-931243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-931243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-931243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-931243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-931243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-931243" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-931243

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-931243

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-931243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-931243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-931243

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-931243

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-931243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-931243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-931243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-931243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-931243" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 10:26:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: offline-crio-637291
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21894-55448/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 10:26:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-642487
contexts:
- context:
    cluster: offline-crio-637291
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 10:26:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: offline-crio-637291
  name: offline-crio-637291
- context:
    cluster: pause-642487
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 10:26:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-642487
  name: pause-642487
current-context: offline-crio-637291
kind: Config
users:
- name: offline-crio-637291
  user:
    client-certificate: /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/offline-crio-637291/client.crt
    client-key: /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/offline-crio-637291/client.key
- name: pause-642487
  user:
    client-certificate: /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487/client.crt
    client-key: /home/jenkins/minikube-integration/21894-55448/.minikube/profiles/pause-642487/client.key
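
Note: the kubeconfig dumped above contains only the offline-crio-637291 and pause-642487 entries, which is why every kubectl-based probe in this debugLogs run reports a missing context: the cilium-931243 profile was never created. A minimal reproduction against this kubeconfig (illustrative commands, not captured output):

	# List the contexts that actually exist in the dumped kubeconfig
	kubectl config get-contexts -o name
	# expected: offline-crio-637291, pause-642487

	# Any probe pinned to the never-created profile fails the same way as above
	kubectl --context cilium-931243 get pods -A
	# error: context "cilium-931243" does not exist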

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-931243

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-931243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931243"

                                                
                                                
----------------------- debugLogs end: cilium-931243 [took: 3.788505016s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-931243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-931243
--- SKIP: TestNetworkPlugins/group/cilium (3.95s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-435527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-435527
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)
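
Note: this group exercises the --disable-driver-mounts behaviour and only runs on the virtualbox driver, so it is skipped on this docker/crio job. For reference, a hypothetical invocation on a supported host would look like (illustrative only, using standard minikube start flags):

	out/minikube-linux-amd64 start -p disable-driver-mounts-435527 --driver=virtualbox --disable-driver-mounts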

                                                
                                    